A Phase-Preserving Focusing Technique for TOPS Mode SAR Raw Data Based on Conventional Processing Methods

Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3321
Author(s):  
Adele Fusco ◽  
Antonio Pepe ◽  
Paolo Berardino ◽  
Claudio De Luca ◽  
Sabatino Buonanno ◽  
...  

We present a new solution for the phase-preserving focusing of synthetic aperture radar (SAR) raw data acquired through the Terrain Observation with Progressive Scan (TOPS) mode. The proposed algorithm consists of a first interpolation stage of the TOPS raw data, which accounts for the Doppler centroid frequency variations due to the azimuth antenna steering and allows us to unfold the azimuth spectra of the TOPS raw data. Subsequently, the interpolated signals are processed using conventional phase-preserving SAR focusing methods that exploit frequency-domain and spectral analysis algorithms, which are extensively used to efficiently process Stripmap and ScanSAR data. Accordingly, the developed focusing approach is easy to implement. In particular, the presented approach exploits one of the available frequency-domain Stripmap processing techniques; the only modification is the inclusion, within the 2D frequency-domain focusing step, of a spurious azimuth chirp signal with a properly selected azimuth rate. This allows us to efficiently carry out the TOPS azimuth focusing through the SPECAN method. Furthermore, an important aspect of this algorithm is the possibility to easily achieve a constant and tunable output azimuth pixel size without any additional computing time; this is a remarkable feature with respect to the full-aperture TOPS-mode algorithms available in the existing literature. Moreover, although tailored to Sentinel-1 (S1) raw data, the proposed algorithm can be easily extended to process data collected through the TOPS mode by different radar sensors. The presented experimental results have been obtained by processing real Sentinel-1 raw data and confirm the effectiveness of the proposed algorithm.
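As a rough illustration of the interpolation stage described above, the following Python sketch deramps a toy azimuth signal and resamples it onto a denser grid so that the unfolded spectrum fits without aliasing. The PRF, steering rate, and upsampling factor are hypothetical placeholders, not Sentinel-1 parameters, and the sketch is not the authors' implementation.

```python
import numpy as np

# Illustrative sketch only: toy azimuth deramping + spectral unfolding of the
# kind described above. All parameter values are assumed for the example.

prf = 1500.0          # pulse repetition frequency [Hz] (assumed)
n_az = 2048           # azimuth samples in the burst
k_steer = 1700.0      # Doppler centroid rate from antenna steering [Hz/s] (assumed)
t = (np.arange(n_az) - n_az // 2) / prf   # slow-time axis [s]

rng = np.random.default_rng(0)
raw = rng.standard_normal(n_az) + 1j * rng.standard_normal(n_az)  # stand-in raw azimuth line

# 1) Deramp: remove the linear Doppler-centroid drift caused by beam steering,
#    folding the burst spectrum back into a common band.
deramped = raw * np.exp(-1j * np.pi * k_steer * t**2)

# 2) Interpolate (here via zero-padded FFT resampling) onto a denser azimuth
#    grid so the total unfolded bandwidth fits without aliasing; in practice
#    the upsampling factor is derived from the steering and focused bandwidths.
upsample = 2          # assumed factor
spec = np.fft.fftshift(np.fft.fft(deramped))
spec_padded = np.pad(spec, n_az * (upsample - 1) // 2)
unfolded = np.fft.ifft(np.fft.ifftshift(spec_padded)) * upsample

# 'unfolded' could now feed a conventional phase-preserving Stripmap-type
# focusing chain, as the abstract describes.
```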

Author(s):  
Robert P. Harrison ◽  
Paul R. Stuart

Multivariate Analysis (MVA), a statistical design tool for dealing with very large datasets, was applied to historical data from a Thermo-Mechanical Pulp (TMP) newsprint mill in Eastern Canada. Partial Least Squares (PLS) type MVA models were created to identify significant correlations between operating parameters in the woodchip refining section and variations in pulp quality. Understanding these relationships is of crucial importance to any eventual retrofit design for this process. This paper focuses on pre-selecting and pre-treating the raw process data, including infrequently measured variables, to maximize the realism and usefulness of the MVA black-box models. Key methods explored were ways of selecting low-production periods for removal, techniques for identifying and eliminating major outliers using MVA outputs, and noise filtering. A major conclusion of this work was that the PLS models were significantly improved by pre-treating the data. This paper recommends an overall design approach for applying MVA to industrial operating data, involving stringent removal of dubious periods of operation such as aberrant process behaviour, and aggressive Exponentially Weighted Moving Average (EWMA) filtering of all dependent and independent variables.
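A minimal sketch of the recommended pre-treatment pipeline, assuming pandas and scikit-learn, is shown below; the column names, smoothing constant, and low-production rule are illustrative assumptions, not values from the mill study.

```python
import numpy as np
import pandas as pd
from sklearn.cross_decomposition import PLSRegression

# Stand-in historical process data (X: refiner operating parameters,
# y: a pulp quality measure). Names and values are hypothetical.
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.standard_normal((500, 6)),
                 columns=[f"op_param_{i}" for i in range(6)])
y = pd.Series(X @ rng.standard_normal(6) + 0.5 * rng.standard_normal(500),
              name="pulp_quality")

# 1) Remove dubious periods (here a crude low-production proxy; the paper
#    selects such periods from production records).
keep = X["op_param_0"] > X["op_param_0"].quantile(0.05)
X, y = X[keep], y[keep]

# 2) Aggressive EWMA filtering of every dependent and independent variable.
alpha = 0.1           # assumed smoothing constant
X_f = X.ewm(alpha=alpha).mean()
y_f = y.ewm(alpha=alpha).mean()

# 3) Fit the PLS black-box model on the pre-treated data.
pls = PLSRegression(n_components=2)
pls.fit(X_f, y_f)
print("R^2 on the filtered data:", pls.score(X_f, y_f))
```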


2017 ◽  
Vol 10 (1) ◽  
Author(s):  
Oliver Hein ◽  
Wolfgang H. Zangemeister

Recent years have witnessed a remarkable growth in the way mathematics, informatics, and computer science can process data. In disciplines such as machine learning, pattern recognition, computer vision, computational neurology, molecular biology, and information retrieval, many new methods have been developed to cope with the ever-increasing amount and complexity of data. These new methods offer interesting possibilities for processing, classifying, and interpreting eye-tracking data. The present paper exemplifies the application of topological arguments to improve the evaluation of eye-tracking data. The task of classifying raw eye-tracking data into saccades and fixations with a single, simple, and intuitive argument, described as coherence of spacetime, is discussed, and the hierarchical ordering of the fixations into dwells is shown. The method, identification by topological characteristics (ITop), is parameter-free and needs no pre-processing or post-processing of the raw data. The general and robust topological argument is easy to extend to the complex settings of higher visual tasks, making it possible to identify visual strategies. An interactive demonstration of the method can be downloaded as a supplementary file.
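The published ITop algorithm is not reproduced in the abstract; the toy sketch below only illustrates the underlying spacetime-coherence intuition, grouping consecutive gaze samples that stay spatially close into fixations, with a hypothetical radius and minimum group length.

```python
import numpy as np

# Toy illustration of the spacetime-coherence idea (NOT the published ITop
# algorithm): consecutive gaze samples whose spread stays within a small ball
# form a fixation; everything in between is treated as saccadic.

def group_fixations(xy, radius=30.0, min_len=5):
    """Greedily group consecutive samples whose spread stays within `radius`.

    xy : (n, 2) array of gaze positions in pixels, ordered in time.
    Returns a list of (start_index, end_index) fixation intervals.
    """
    fixations, start = [], 0
    for i in range(1, len(xy)):
        chunk = xy[start:i + 1]
        # Coherence test: the chunk still fits inside a small spatial ball.
        if np.linalg.norm(chunk - chunk.mean(axis=0), axis=1).max() > radius:
            if i - start >= min_len:        # keep only sufficiently long groups
                fixations.append((start, i - 1))
            start = i
    if len(xy) - start >= min_len:
        fixations.append((start, len(xy) - 1))
    return fixations

# Example: two noisy fixations connected by a fast saccade.
rng = np.random.default_rng(2)
fix1 = rng.normal([100, 100], 5, size=(50, 2))
sacc = np.linspace([100, 100], [400, 300], 8)
fix2 = rng.normal([400, 300], 5, size=(50, 2))
print(group_fixations(np.vstack([fix1, sacc, fix2])))
```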


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5294
Author(s):  
Pau Muñoz-Benavent ◽  
Vicente Puig-Pons ◽  
Gabriela Andreu-García ◽  
Víctor Espinosa ◽  
Vicente Atienza-Vanacloig ◽  
...  

A proposal is described for an underwater sensor combining an acoustic device with an optical one to automatically size juvenile bluefin tuna from a ventral perspective. Acoustic and optical information is acquired while the tuna swim freely and cross the combined sensor's field of view. Image processing techniques are used to identify and classify fish traces in the acoustic data (echogram), while the video frames are processed by fitting a deformable model of the fish's ventral silhouette. Finally, the fish are sized by combining the processed acoustic and optical data, once the correspondence between the two kinds of data has been verified. The proposed system automatically gives accurate measurements of the tuna's Snout-Fork Length (SFL) and width. In comparison with our previously validated automatic sizing procedure based on stereoscopic vision, this proposal increases the number of samples processed per hour of computing time by a factor of 7.2 in a tank with 77 juvenile Atlantic bluefin tuna (Thunnus thynnus), without compromising the accuracy of the measurements. This work validates the procedure of combining acoustic and optical data for fish sizing and is the first step towards an embedded sensor, whose electronics and processing capabilities should be optimized to be autonomous in terms of power supply and to enable real-time processing.
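As a sketch of the correspondence step, the snippet below pairs acoustic and optical detections by nearest timestamp within a tolerance; the record layout and tolerance value are assumptions for illustration, not the sensor's actual association logic.

```python
from bisect import bisect_left

# Minimal sketch of the data-association step: each acoustic trace is paired
# with the video detection closest in time, and the pair is accepted only if
# the timestamps agree within a tolerance (assumed value below).

def match_events(acoustic_ts, optical_ts, tol=0.2):
    """Pair each acoustic timestamp with the nearest optical timestamp.

    acoustic_ts, optical_ts : sorted lists of event times [s].
    Returns a list of (acoustic_index, optical_index) verified pairs.
    """
    pairs = []
    for i, ta in enumerate(acoustic_ts):
        j = bisect_left(optical_ts, ta)
        candidates = [k for k in (j - 1, j) if 0 <= k < len(optical_ts)]
        best = min(candidates, key=lambda k: abs(optical_ts[k] - ta))
        if abs(optical_ts[best] - ta) <= tol:
            pairs.append((i, best))
    return pairs

# Example: three crossings seen by both devices, one acoustic-only trace.
print(match_events([10.02, 11.50, 13.75, 20.00], [10.00, 11.48, 13.80]))
```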


2019 ◽  
Vol 16 (5) ◽  
pp. 1001-1014 ◽  
Author(s):  
Zi-Ying Wang ◽  
Jian-Ping Huang ◽  
Ding-Jin Liu ◽  
Zhen-Chun Li ◽  
Peng Yong ◽  
...  

Full-waveform inversion (FWI) is a powerful tool for reconstructing subsurface geophysical parameters with high resolution. As 3D surveys become widely implemented, corresponding 3D processing techniques are required to solve complex geological cases, and the large amount of computation involved is the most challenging problem. We propose an adaptive variable-grid 3D FWI on graphics processing unit (GPU) devices to improve computational efficiency without losing accuracy. The irregular-grid discretization strategy is based on a dispersion relation, and the grid size adapts to depth, velocity, and frequency automatically. According to the transformed grid coordinates, we derive a modified acoustic wave equation and apply it to full-wavefield simulation. The 3D variable-grid modeling is conducted on several 3D models to validate its feasibility, accuracy, and efficiency. We then apply the proposed modeling method to full-waveform inversion for source and residual wavefield propagation. It is demonstrated that the adaptive variable-grid FWI decreases both computing time and memory requirements. The inversion results for the 3D SEG/EAGE overthrust model show that our method retains inversion accuracy when recovering both the thrust and the channels.
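A sketch of the dispersion-relation grid rule is given below, assuming the common points-per-minimum-wavelength criterion; the constants are textbook choices, not necessarily those used in the paper.

```python
import numpy as np

# Sketch of a dispersion-based grid-size rule for a variable-grid
# discretization: the local spacing is tied to the local velocity and the
# maximum frequency so every wavelength is sampled by a fixed number of points.

def adaptive_dz(v_profile, f_max, points_per_wavelength=8):
    """Return a depth-dependent grid spacing from the dispersion relation.

    v_profile : 1D array of velocity vs depth [m/s]
    f_max     : maximum propagated frequency [Hz]
    The minimum wavelength at depth z is v(z)/f_max; sampling it with
    `points_per_wavelength` points gives dz(z) = v(z) / (G * f_max).
    """
    return v_profile / (points_per_wavelength * f_max)

# Example: velocity increasing with depth, as is typical of overthrust-style models.
v = np.linspace(1500.0, 4500.0, 10)       # [m/s]
dz = adaptive_dz(v, f_max=30.0)
print(np.round(dz, 1))   # spacing grows from ~6 m near the surface to ~19 m at depth

# A coordinate transform xi(z) with d(xi)/dz proportional to 1/dz(z) maps this
# irregular grid to a regular one, which is the role played by the modified
# wave equation mentioned in the abstract.
```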


Geophysics ◽  
2016 ◽  
Vol 81 (2) ◽  
pp. E113-E128 ◽  
Author(s):  
Jianhui Li ◽  
Colin G. Farquharson ◽  
Xiangyun Hu

The inverse Laplace transform is one of the methods used to obtain time-domain electromagnetic (EM) responses in geophysics. The Gaver-Stehfest algorithm has so far been the most popular technique for computing the inverse Laplace transform in the context of transient electromagnetics. However, the accuracy of the Gaver-Stehfest algorithm, even when using double-precision arithmetic, is relatively low at late times due to round-off errors. To overcome this issue, we have applied variable-precision arithmetic in the MATLAB computing environment to an implementation of the Gaver-Stehfest algorithm. This approach has proved effective in terms of improving accuracy, but it is computationally expensive. In addition, the Gaver-Stehfest algorithm is significantly problem dependent. We have therefore turned our attention to two other algorithms for computing inverse Laplace transforms, namely the Euler and Talbot algorithms. Using as examples the responses for central-loop, fixed-loop, and horizontal electric dipole sources over homogeneous and layered media, these two algorithms, implemented with ordinary double-precision arithmetic, have been shown to provide more accurate results and to be less problem dependent than the standard Gaver-Stehfest algorithm. Furthermore, they can yield more accurate time-domain responses than the cosine and sine transforms, for which the frequency-domain responses are obtained by interpolation between a limited number of explicitly computed frequency-domain responses. In addition, the Euler and Talbot algorithms potentially require fewer Laplace- or frequency-domain function evaluations than the other transform methods commonly used to compute time-domain EM responses, and thus provide a more efficient option.
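The comparison can be reproduced in spirit with mpmath, which ships Talbot and Gaver-Stehfest inverters (the Euler algorithm is not included in mpmath, so it is omitted here); the test transform below has the known inverse f(t) = exp(-t).

```python
import mpmath as mp

# Compare two numerical inverse Laplace transform algorithms on a transform
# with a known inverse. High working precision mirrors the paper's
# variable-precision experiments.

mp.mp.dps = 30

F = lambda s: 1 / (s + 1)           # Laplace transform of f(t) = exp(-t)

for t in (0.1, 1.0, 10.0, 100.0):   # late times are where Stehfest degrades
    exact = mp.exp(-t)
    talbot = mp.invertlaplace(F, t, method='talbot')
    stehfest = mp.invertlaplace(F, t, method='stehfest')
    print(f"t={t:>6}: |err Talbot|={float(abs(talbot - exact)):.1e}, "
          f"|err Stehfest|={float(abs(stehfest - exact)):.1e}")
```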


2019 ◽  
Vol 11 (21) ◽  
pp. 2544
Author(s):  
He ◽  
Zhang ◽  
Yi ◽  
Jin ◽  
Dong

The use of electronically steered antennas in the azimuth dimension typically leads to a staircase-like antenna beam steering law in the Terrain Observation by Progressive Scan (TOPS) wide-swath synthetic aperture radar (SAR) data acquisition mode, which introduces paired echoes in the focused images. This paper proposes a new approach, based on a generalization of the ideal optimum filtering concept, for removing such paired echoes from TOPS SAR images; it can be implemented easily in SAR data processing. Modeling the amplitude-modulated azimuth signal shows that the absolute phase of the introduced paired echoes cannot be determined, because the rotation-angle jump time is random for each target, which prevents the precise use of optimum filtering. An extended optimum filtering approach, originally proposed for suppressing azimuth ambiguities in SAR images, is reintroduced for this particular case, and a new approximated and generalized form of its deconvolving filter is defined to accommodate the undetermined phase for both the strongest paired distortion peaks and the other peripheral peaks in the distorted impulse response function (IRF). Simulated data from a TOPS SAR mode with staircase-like beam steering are used to verify the improvement in image quality achieved by the new method.
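For orientation, the sketch below shows plain deconvolution filtering of a modeled paired-echo impulse response; unlike the paper's extended filter, it assumes the echo phase is known, which is exactly the assumption the paper removes. All signal parameters are made up for the example.

```python
import numpy as np

# Generic illustration of deconvolution filtering against paired echoes (NOT
# the paper's extended optimum filter): the distorted IRF is modeled as a main
# peak plus two symmetric ghosts, and its regularized inverse is applied in
# the frequency domain.

n = 512
a, d = 0.08, 40                       # paired-echo amplitude and offset (assumed)
phi = 0.7                             # echo phase; unknown in practice (assumed known here)
irf = np.zeros(n, dtype=complex)
irf[n // 2] = 1.0
irf[n // 2 - d] += a * np.exp(1j * phi)
irf[n // 2 + d] += a * np.exp(-1j * phi)

# Distorted "image line": an ideal response convolved with the paired-echo IRF.
rng = np.random.default_rng(3)
scene = rng.standard_normal(n) + 1j * rng.standard_normal(n)
H = np.fft.fft(np.fft.ifftshift(irf))
distorted = np.fft.ifft(np.fft.fft(scene) * H)

# Deconvolving filter: conj(H)/(|H|^2 + eps), a Tikhonov-regularized inverse
# that stays stable where |H| is small.
eps = 1e-3
restored = np.fft.ifft(np.fft.fft(distorted) * np.conj(H) / (np.abs(H)**2 + eps))

print("rms error before:", np.sqrt(np.mean(np.abs(distorted - scene)**2)))
print("rms error after :", np.sqrt(np.mean(np.abs(restored - scene)**2)))
```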


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Guowei Zhang ◽  
Jinrui Wang ◽  
Baokun Han ◽  
Sixiang Jia ◽  
Xiaoyu Wang ◽  
...  

Increased attention has been paid to research on intelligent fault diagnosis based on acoustic signals. However, the signal-to-noise ratio of acoustic signals is much lower than that of vibration signals, which increases the difficulty of signal denoising and feature extraction. To address this problem, a novel batch-normalized deep sparse filtering (DSF) method is proposed to diagnose faults of rotating machinery from acoustic signals. In the first stage, the collected acoustic signals are pre-normalized to eliminate the adverse effects of singular samples, and the normalized signals are then transformed into frequency-domain signals through the fast Fourier transform (FFT). In the second stage, the learned features are obtained by training the batch-normalized DSF with the frequency-domain signals, and the features are then fine-tuned by the backpropagation (BP) algorithm. In the third stage, softmax regression is used as a classifier for health-condition recognition based on the fine-tuned features. Bearing and planetary gear datasets are used to validate the diagnostic performance of the proposed method. The results show that the proposed DSF model extracts more powerful features and requires less computing time than traditional methods.
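A minimal PyTorch sketch of one batch-normalized sparse filtering layer, following Ngiam et al.'s sparse filtering objective, is given below; the layer sizes and optimizer settings are assumptions, and the full DSF stacks several such layers with BP fine-tuning and a softmax classifier on top.

```python
import torch

# Sparse filtering objective: soft absolute value of the features, normalized
# per feature and then per sample, with the L1 sum minimized.
def sparse_filtering_loss(features, eps=1e-8):
    f = torch.sqrt(features**2 + eps)            # soft absolute value
    f = f / (f.norm(dim=0, keepdim=True) + eps)  # normalize each feature column
    f = f / (f.norm(dim=1, keepdim=True) + eps)  # then each sample row
    return f.sum()

# One batch-normalized sparse filtering layer on FFT-magnitude inputs
# (sizes are assumed for the example).
n_freq, n_feat = 1024, 100
layer = torch.nn.Sequential(
    torch.nn.BatchNorm1d(n_freq),
    torch.nn.Linear(n_freq, n_feat, bias=False),
)

x = torch.randn(64, n_freq)                      # stand-in frequency-domain batch
opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
for step in range(200):                          # unsupervised pre-training
    opt.zero_grad()
    loss = sparse_filtering_loss(layer(x))
    loss.backward()
    opt.step()
print("final sparse filtering loss:", loss.item())
```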


2012 ◽  
Vol 572 ◽  
pp. 210-214
Author(s):  
Jun Wang ◽  
Jian Zhong Xu ◽  
Guo Dong Wang ◽  
Zhi Ping Yan

During the past two decades, progress has been achieved in plate production in various areas, from the level of equipment to the processing techniques, and from product size to product quality. However, plate camber, which can affect the stability of rolling, has remained a thorny problem due to its complexity and variability. The factors that can lead to plate camber, including slab wedge, temperature non-uniformity across the plate width, side-guide misalignment, stiffness differences between the two sides of the mill, and inclination of the roll gap, have been analyzed, and the characteristic features of plate camber have been studied. Using measurement and quantitative analysis of plate camber, process data analysis of the plate, equipment monitoring, and operational adjustments, a systematic diagnostic strategy for plate camber has been developed. The developed diagnostic strategy has performed satisfactorily at a domestic plate mill.

