Deep learning-based high-frequency source depth estimation using a single sensor
2021, Vol. 149(3), pp. 1454-1465
Author(s): Seunghyun Yoon, Haesang Yang, Woojae Seong

1984, Vol. 74(5), pp. 1623-1643
Author(s): Falguni Roy

Abstract A depth estimation procedure is described that attempts to identify depth phases by analyzing multi-station waveform data (hereafter called level II data) in various ways, including deconvolution, prediction error filtering, and spectral analysis of the signals. In the absence of such observable phases, other methods based on S-P, ScS-P, and SKS-P travel times are tried to obtain an estimate of the source depth. The procedure was applied to waveform data collected from 31 globally distributed stations for the period between 1 and 15 October 1980. The digital data were analyzed at the temporary data center facilities of the National Defense Research Institute, Stockholm, Sweden. During this period, a total of 162 events in the magnitude range 3.5 to 6.2 were defined by analyzing first-arrival-time data (hereafter called level I data) alone. For 120 of these events, it was possible to estimate depths using the present procedure. The applicability of the procedure was found to be 100 per cent for events with mb > 4.8 and 88 per cent for events with mb > 4. A comparison of level I depths and level II depths (the depths obtained from level I and level II data, respectively) with the United States Geological Survey estimates indicated that at least one local station (Δ < 10°) is needed among the level I data to obtain reasonable depth estimates from such data alone. Further, it has been shown that S-wave travel times can be successfully utilized for the estimation of source depth.
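
The depth-phase principle underlying this abstract admits a compact illustration. Below is a minimal Python sketch (not the author's code): the pP-P delay grows with source depth, so depth can be inverted from an observed delay. The near-source P velocity and takeoff angle are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of depth estimation from a depth-phase (pP-P) delay.
# Velocity and takeoff angle below are assumed illustrative values.
import math

def depth_from_pP_delay(t_pP_minus_P, v_p=6.5, takeoff_deg=20.0):
    """Estimate source depth (km) from the pP-P delay (s).

    For a teleseismic ray leaving the source at angle `takeoff_deg`
    from vertical, pP travels the extra up-and-down leg through the
    source layer, so t_pP - t_P ~= 2 * h * cos(i) / v_p.
    Solving for h gives the estimate below.
    """
    return t_pP_minus_P * v_p / (2.0 * math.cos(math.radians(takeoff_deg)))

# Example: a 4.6 s pP-P delay maps to roughly a 16 km deep source.
print(f"{depth_from_pP_delay(4.6):.1f} km")
```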


2018, Vol. 8(8), pp. 1258
Author(s): Shuming Jiao, Zhi Jin, Chenliang Chang, Changyuan Zhou, Wenbin Zou, ...

Reducing the enormous amount of data involved in the processing, storage, and transmission of a hologram in digital format is a critical issue. In photographic compression, the JPEG standard is supported by almost every system and device, so it would be favorable if the JPEG standard were applicable to hologram compression, bringing the advantage of universal compatibility. However, the image reconstructed from a JPEG-compressed hologram suffers severe quality degradation, since some high-frequency features in the hologram are lost during compression. In this work, we employ a deep convolutional neural network to reduce the artifacts in a JPEG-compressed hologram. Simulation and experimental results reveal that the proposed “JPEG + deep learning” hologram compression scheme achieves satisfactory reconstruction results for a computer-generated phase-only hologram after compression.
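
As a concrete illustration of the “JPEG + deep learning” idea, the sketch below shows an ARCNN-style artifact-reduction network in PyTorch. The layer sizes and residual formulation are illustrative assumptions, not the architecture from the paper; training would minimize a reconstruction loss between the restored hologram and the uncompressed original.

```python
# Minimal sketch: a small CNN that maps a JPEG-decoded phase-only
# hologram back toward its uncompressed version. Layer widths and
# kernel sizes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class ArtifactReductionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=7, padding=3),  # feature enhancement
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 16, kernel_size=1),             # nonlinear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, x):
        # Predict the artifact residual and add it back (residual learning).
        return x + self.net(x)

model = ArtifactReductionCNN()
compressed = torch.rand(4, 1, 128, 128)  # batch of JPEG-decoded hologram tiles
restored = model(compressed)
# Training target would be the original uncompressed holograms:
loss = nn.functional.mse_loss(restored, torch.rand(4, 1, 128, 128))
```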


Author(s): L. Madhuanand, F. Nex, M. Y. Yang

Abstract. Depth is an essential component of various scene understanding tasks and of reconstructing the 3D geometry of a scene. Estimating depth from stereo images requires multiple views of the same scene to be captured, which is often not possible when exploring new environments with a UAV. To overcome this, monocular depth estimation has become a topic of interest alongside recent advancements in computer vision and deep learning. Research in this area has largely focused on indoor scenes or outdoor scenes captured at ground level; single image depth estimation from aerial images has been limited by additional complexities arising from the increased camera distance and wider area coverage with many occlusions. A new aerial image dataset is prepared specifically for this purpose, combining Unmanned Aerial Vehicle (UAV) images covering different regions, features, and points of view. The single image depth estimation is based on image reconstruction techniques that use stereo images to learn to estimate depth from single images. Among the various available models for ground-level single image depth estimation, two models, 1) a Convolutional Neural Network (CNN) and 2) a Generative Adversarial Network (GAN), are used to learn depth from UAV aerial images. These models generate pixel-wise disparity images, which can be converted into depth information. The generated disparity maps are evaluated for internal quality using various error metrics. The results show that the CNN model produces smoother images with higher disparity ranges, while the GAN model produces sharper images with smaller disparity ranges. The produced disparity images are converted to depth information and compared with point clouds obtained using Pix4D. The CNN model is found to perform better than the GAN and to produce depth similar to that of Pix4D. This comparison helps streamline efforts to produce depth from a single aerial image.
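
The disparity-to-depth conversion mentioned in the abstract follows the standard rectified-stereo relation depth = f · B / d. The sketch below illustrates it; the focal length and baseline are illustrative assumptions that would in practice come from the UAV camera calibration and the stereo configuration.

```python
# Minimal sketch of converting a pixel-wise disparity map to metric depth.
# focal_px and baseline_m are assumed illustrative calibration values.
import numpy as np

def disparity_to_depth(disparity, focal_px=1200.0, baseline_m=0.3, eps=1e-6):
    """Convert a disparity map (pixels) to depth (metres).

    For a rectified stereo pair, depth = f * B / d, where f is the focal
    length in pixels, B the baseline in metres, and d the disparity.
    eps guards against division by zero where disparity vanishes.
    """
    return focal_px * baseline_m / np.maximum(disparity, eps)

# Example: a 3-pixel disparity at f = 1200 px, B = 0.3 m gives 120 m depth,
# typical of the larger camera-to-scene distances in aerial imagery.
depth_map = disparity_to_depth(np.full((480, 640), 3.0))
print(depth_map[0, 0])  # 120.0
```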

