An automatic technique for presentation of coincident‐loop, impulse‐response, transient, electromagnetic data

Geophysics ◽  
1994 ◽  
Vol 59 (10) ◽  
pp. 1542-1550 ◽  
Author(s):  
Richard S. Smith ◽  
R. N. Edwards ◽  
G. Buselli

Coincident‐loop TEM sounding data are often presented by plotting the half‐space apparent conductivity as a function of delay time. A new algorithm generates an improved presentation that plots the apparent conductivity as a function of depth. The resulting data may be further processed to sharpen or “spike” the smoothly varying apparent‐conductivity/depth curves in an attempt to better represent the rapid changes in conductivity that often exist in the earth. The algorithm described involves an approximation, but is simple, easy to use, and computationally efficient. A layered conductivity structure is assumed, so the algorithm is best suited to areas where the geology is approximately horizontal. However, the algorithm can also be used to identify anomalous features that are not infinite horizontal layers. The spiked conductivity models derived from synthetic data are consistent with the original layered‐earth models and show greater resolution than the apparent‐conductivity/depth curves, but can sometimes amplify noise in the data. When data are collected along a profile line, the conductivity/depth information can be converted to a color image. For profile data collected over the Elura orebody, the image of the spiked conductivity section shows an anomalous feature at the orebody, and the color contrast is more marked than it is on the apparent‐conductivity/depth image.
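As a rough illustration of the depth-presentation idea only (not the authors' algorithm; the diffusion-depth relation and example values are assumptions), the Python sketch below maps a set of delay times and half-space apparent conductivities to an apparent-conductivity/depth curve. The subsequent "spiking" step described above is specific to the paper and is not reproduced here.

```python
# A minimal sketch (not the authors' algorithm): mapping a coincident-loop TEM
# sounding from delay time to apparent depth via the half-space diffusion-depth
# relation d ~ sqrt(2 t / (mu0 * sigma_a)).  The input apparent conductivities
# sigma_a(t) are assumed to have been computed already by a standard half-space
# transformation of the measured voltages.
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space [H/m]

def apparent_conductivity_vs_depth(delay_times_s, sigma_a_spm):
    """Convert (delay time, apparent conductivity) pairs to
    (apparent depth, apparent conductivity) pairs."""
    t = np.asarray(delay_times_s, dtype=float)
    sigma = np.asarray(sigma_a_spm, dtype=float)
    depth = np.sqrt(2.0 * t / (MU0 * sigma))  # diffusion-depth approximation [m]
    return depth, sigma

# Example: a synthetic sounding with conductivity increasing at late times
times = np.logspace(-4, -2, 20)                 # 0.1 ms to 10 ms
sigma_a = 0.01 + 0.04 * (times / times.max())   # S/m, purely illustrative
depth, sigma = apparent_conductivity_vs_depth(times, sigma_a)
for d, s in zip(depth, sigma):
    print(f"depth {d:8.1f} m   sigma_a {s:.4f} S/m")
```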

Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3816
Author(s):  
Tao Wang ◽  
Yuanzheng Cai ◽  
Lingyu Liang ◽  
Dongyi Ye

We address the problem of localizing waste objects from a color image and an optional depth image, which is a key perception component for robotic interaction with such objects. Specifically, our method integrates the intensity and depth information at multiple levels of spatial granularity. Firstly, a scene-level deep network produces an initial coarse segmentation, based on which we select a few potential object regions to zoom in and perform fine segmentation. The results of the above steps are further integrated into a densely connected conditional random field that learns to respect the appearance, depth, and spatial affinities with pixel-level accuracy. In addition, we create a new RGBD waste object segmentation dataset, MJU-Waste, that is made public to facilitate future research in this area. The efficacy of our method is validated on both MJU-Waste and the Trash Annotation in Context (TACO) dataset.
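A rough structural sketch of the coarse-to-fine idea described above, with placeholder segmenters standing in for the deep networks (the region selection, crop size, and everything else here are assumptions, not the paper's implementation, and the dense CRF stage is omitted):

```python
# Coarse-to-fine segmentation sketch: run a scene-level segmenter, pick
# high-scoring object regions, re-segment each zoomed-in crop at finer
# resolution, and paste the refined masks back into the full-frame mask.
import numpy as np

def coarse_segment(image):
    """Placeholder scene-level segmenter: per-pixel object probability."""
    return np.random.rand(*image.shape[:2])

def fine_segment(patch):
    """Placeholder object-level segmenter applied to a zoomed-in crop."""
    return (np.random.rand(*patch.shape[:2]) > 0.5).astype(np.uint8)

def coarse_to_fine(image, num_regions=3, crop=64):
    prob = coarse_segment(image)
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    top = np.argsort(prob.ravel())[::-1][:num_regions]     # top-scoring pixels
    for idx in top:
        y, x = np.unravel_index(idx, prob.shape)
        y0, y1 = max(0, y - crop // 2), min(prob.shape[0], y + crop // 2)
        x0, x1 = max(0, x - crop // 2), min(prob.shape[1], x + crop // 2)
        mask[y0:y1, x0:x1] |= fine_segment(image[y0:y1, x0:x1])
    return mask

mask = coarse_to_fine(np.zeros((240, 320, 3), dtype=np.uint8))
print("refined mask covers", int(mask.sum()), "pixels")
```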


2021 ◽  
Vol 40 (3) ◽  
pp. 1-12
Author(s):  
Hao Zhang ◽  
Yuxiao Zhou ◽  
Yifei Tian ◽  
Jun-Hai Yong ◽  
Feng Xu

Reconstructing hand-object interactions is a challenging task due to strong occlusions and complex motions. This article proposes a real-time system that uses a single depth stream to simultaneously reconstruct hand poses, object shape, and rigid/non-rigid motions. To achieve this, we first train a joint learning network to segment the hand and object in a depth image, and to predict the 3D keypoints of the hand. With most layers shared between the two tasks, computation cost is reduced, which helps maintain real-time performance. A hybrid dataset is constructed to train the network with real data (to learn real-world distributions) and synthetic data (to cover variations of objects, motions, and viewpoints). Next, the depth of the two targets and the keypoints are used in a uniform optimization to reconstruct the interacting motions. Benefiting from a novel tangential contact constraint, the system not only resolves the remaining ambiguities but also maintains real-time performance. Experiments show that our system handles different hand and object shapes, various interactive motions, and moving cameras.
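As a toy, hedged illustration of what a tangential contact term might look like (the formulation below is an assumption for exposition, not the paper's constraint), the following snippet penalizes in-plane sliding of hand contact points relative to the object surface while ignoring motion along the surface normal:

```python
# Toy contact energy: for each hand point flagged as "in contact", penalize its
# frame-to-frame displacement in the tangent plane of the object surface, leaving
# the normal component to a separate penetration term.
import numpy as np

def tangential_sliding_energy(prev_pts, curr_pts, normals, in_contact):
    """Sum of squared tangential displacements over contact points.

    prev_pts, curr_pts : (N, 3) hand contact points at consecutive frames,
                         expressed in the object's local frame.
    normals            : (N, 3) unit object-surface normals at the contacts.
    in_contact         : (N,) boolean contact flags.
    """
    disp = curr_pts - prev_pts                                     # relative motion
    normal_part = (disp * normals).sum(axis=1, keepdims=True) * normals
    tangential = disp - normal_part                                # remove normal component
    return float((tangential[in_contact] ** 2).sum())

prev = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
curr = np.array([[0.01, 0.0, 0.002], [0.1, 0.0, 0.0]])
n = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
flags = np.array([True, False])
print("tangential sliding energy:", tangential_sliding_energy(prev, curr, n, flags))
```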


2015 ◽  
Vol 15 (12) ◽  
pp. 2703-2713 ◽  
Author(s):  
C. Melchiorre ◽  
A. Tryggvason

Abstract. We refine and test an algorithm for landslide susceptibility assessment in areas with sensitive clays. The algorithm uses soil data and digital elevation models to identify areas which may be prone to landslides and has been applied in Sweden for several years. The algorithm is very computationally efficient and includes an intelligent filtering procedure for identifying and removing small-scale artifacts in the hazard maps produced. Where information on bedrock depth is available, this can be included in the analysis, as can information on several soil-type-based cross-sectional angle thresholds for slip. We evaluate how processing choices such as filtering parameters, local cross-sectional angle thresholds, and the inclusion of bedrock depth information affect model performance. The specific cross-sectional angle thresholds used were derived by analyzing the relationship between landslide scarps and the quick-clay susceptibility index (QCSI). We tested the algorithm in the Göta River valley. Several different verification measures were used to compare results with observed landslides and thereby identify the optimal algorithm parameters. Our results show that even though a relationship between the cross-sectional angle threshold and the QCSI could be established, no significant improvement of the overall modeling performance could be achieved by using these geographically specific, soil-based thresholds. Our results indicate that lowering the cross-sectional angle threshold from 1 : 10 (the general value used in Sweden) to 1 : 13 improves results slightly. We also show that an application of the automatic filtering procedure that removes areas initially classified as prone to landslides not only removes artifacts and makes the maps visually more appealing, but also improves the model performance.
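For intuition only, the following Python sketch implements a generic cross-sectional-angle test on a DEM (the search strategy and example values are assumptions, not the operational Swedish algorithm or its filtering procedure): a cell is flagged if the drop to any nearby cell exceeds the chosen height-to-distance ratio, e.g. 1 : 13 instead of the general 1 : 10.

```python
# Flag DEM cells whose drop to any cell within a search radius exceeds a
# height:distance threshold.  Purely illustrative parameter choices.
import numpy as np

def flag_susceptible(dem, cell_size, ratio=1.0 / 13.0, radius_cells=5):
    rows, cols = dem.shape
    flagged = np.zeros_like(dem, dtype=bool)
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(0, r - radius_cells), min(rows, r + radius_cells + 1)
            c0, c1 = max(0, c - radius_cells), min(cols, c + radius_cells + 1)
            rr, cc = np.meshgrid(np.arange(r0, r1), np.arange(c0, c1), indexing="ij")
            dist = np.hypot(rr - r, cc - c) * cell_size
            dist[dist == 0] = np.inf                     # skip the cell itself
            drop = dem[r, c] - dem[r0:r1, c0:c1]         # positive = downhill
            flagged[r, c] = np.any(drop / dist > ratio)
    return flagged

# Illustrative DEM: a 2 m high bank next to flat ground, 10 m cells
dem = np.zeros((10, 10))
dem[:, :5] = 2.0
print("flagged cells:", int(flag_susceptible(dem, cell_size=10.0).sum()))
```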


Author(s):  
Hyun Jun Park ◽  
Kwang Baek Kim

The Intel RealSense depth camera provides a depth image using an infrared projector and an infrared camera. Using infrared radiation makes it possible to measure depth with high accuracy, but infrared shadows leave regions in which the depth cannot be measured. The Intel RealSense SDK provides a post-processing algorithm to correct this, but its results are not accurate enough and need to be improved. We therefore propose a method to correct the depth image using image processing techniques. The proposed method corrects the depth using adjacent depth information. Experimental results showed that the proposed method corrects the depth image more accurately than the Intel RealSense SDK.
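The following Python sketch illustrates the general idea of filling unmeasured depth values from adjacent pixels (a simple median-of-neighbours scheme assumed for illustration, not the proposed method or the RealSense SDK post-processing):

```python
# Treat zero depth values as unmeasured and iteratively fill each one with the
# median of its valid 8-connected neighbours until no holes remain.
import numpy as np

def fill_depth_holes(depth, max_iters=50):
    d = depth.astype(float).copy()
    for _ in range(max_iters):
        holes = np.argwhere(d == 0)
        if holes.size == 0:
            break
        filled_any = False
        for y, x in holes:
            win = d[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            valid = win[win > 0]
            if valid.size:
                d[y, x] = np.median(valid)
                filled_any = True
        if not filled_any:          # remaining holes have no valid neighbours yet
            break
    return d

depth = np.array([[800, 0, 820],
                  [790, 0,   0],
                  [785, 800, 810]], dtype=np.uint16)   # 0 = unmeasured
print(fill_depth_holes(depth))
```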


2013 ◽  
Vol 28 (2) ◽  
pp. 297-315 ◽  
Author(s):  
Steven M. Lazarus ◽  
Samuel T. Wilson ◽  
Michael E. Splitt ◽  
Gary A. Zarillo

Abstract A computationally efficient method of producing tropical cyclone (TC) wind analyses is developed and tested, using a hindcast methodology, for 12 Gulf of Mexico storms. The analyses are created by blending synthetic data, generated from a simple parametric model constructed using extended best-track data and climatology, with a first-guess field obtained from the NCEP–NCAR North American Regional Reanalysis (NARR). Tests are performed whereby parameters in the wind analysis and vortex model are varied in an attempt to best represent the TC wind fields. A comparison between nonlinear and climatological estimates of the TC size parameter indicates that the former yields a much improved correlation with the best-track radius of maximum wind rm. The analysis, augmented by a pseudoerror term that controls the degree of blending between the NARR and parametric winds, is tuned using buoy observations to calculate wind speed root-mean-square deviation (RMSD), scatter index (SI), and bias. The bias is minimized when the parametric winds are confined to the inner-core region. Analysis wind statistics are stratified within a storm-relative reference frame and by radial distance from storm center, storm intensity, radius of maximum wind, and storm translation speed. The analysis decreases the bias and RMSD in all quadrants for both moderate and strong storms and is most improved for storms with an rm of less than 20 n mi. The largest SI reductions occur for strong storms and storms with an rm of less than 20 n mi. The NARR impacts the analysis bias: when the bias in the former is relatively large, it remains so in the latter.
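As a schematic illustration of radius-dependent blending between a parametric vortex and a first-guess field (the vortex profile, weighting function, and numbers below are assumptions, not the paper's parametric model or its buoy-tuned pseudoerror term):

```python
# Blend a toy parametric vortex with a background (first-guess) wind profile,
# giving the parametric winds full weight inside the inner core and tapering
# their influence to zero farther out.
import numpy as np

def parametric_wind(r_km, vmax=50.0, rm_km=30.0):
    """Toy Rankine-style vortex: linear rise to vmax at rm, then 1/r decay."""
    r = np.asarray(r_km, dtype=float)
    return np.where(r <= rm_km, vmax * r / rm_km, vmax * rm_km / np.maximum(r, 1e-6))

def blend(r_km, background_wind, rm_km=30.0, taper_km=60.0):
    """Weight 1 inside rm, decreasing linearly to 0 by rm + taper."""
    w = np.clip(1.0 - (np.asarray(r_km, dtype=float) - rm_km) / taper_km, 0.0, 1.0)
    return w * parametric_wind(r_km, rm_km=rm_km) + (1.0 - w) * background_wind

radii = np.array([10.0, 30.0, 60.0, 120.0, 200.0])   # km from storm centre
first_guess = np.array([12.0, 15.0, 14.0, 10.0, 8.0])  # background wind speed, m/s
print(blend(radii, first_guess))
```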


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 393 ◽  
Author(s):  
Jonha Lee ◽  
Dong-Wook Kim ◽  
Chee Won ◽  
Seung-Won Jung

Segmentation of human bodies in images is useful for a variety of applications, including background substitution, human activity recognition, security, and video surveillance applications. However, human body segmentation has been a challenging problem, due to the complicated shape and motion of a non-rigid human body. Meanwhile, depth sensors with advanced pattern recognition algorithms provide human body skeletons in real time with reasonable accuracy. In this study, we propose an algorithm that projects the human body skeleton from a depth image to a color image, where the human body region is segmented in the color image by using the projected skeleton as a segmentation cue. Experimental results using the Kinect sensor demonstrate that the proposed method provides high quality segmentation results and outperforms the conventional methods.
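The projection step at the core of this approach can be sketched as follows; the camera intrinsics and extrinsics below are made-up placeholders rather than Kinect calibration values, and the segmentation stage itself (e.g. seeding a method such as GrabCut with the projected joints) is not shown:

```python
# Back-project a skeleton joint from the depth image into 3D, transform it into
# the colour camera's frame, and project it into colour-image pixel coordinates.
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy) and depth-to-colour extrinsics
K_DEPTH = dict(fx=365.0, fy=365.0, cx=256.0, cy=212.0)
K_COLOR = dict(fx=1050.0, fy=1050.0, cx=960.0, cy=540.0)
R = np.eye(3)                      # rotation depth -> colour (placeholder)
T = np.array([0.052, 0.0, 0.0])    # translation in metres (placeholder)

def depth_joint_to_color_pixel(u, v, depth_m):
    """Map a joint at depth-image pixel (u, v) with depth depth_m to the colour image."""
    # Back-project to a 3D point in the depth camera frame
    x = (u - K_DEPTH["cx"]) * depth_m / K_DEPTH["fx"]
    y = (v - K_DEPTH["cy"]) * depth_m / K_DEPTH["fy"]
    p_depth = np.array([x, y, depth_m])
    # Transform into the colour camera frame and re-project
    p_color = R @ p_depth + T
    uc = K_COLOR["fx"] * p_color[0] / p_color[2] + K_COLOR["cx"]
    vc = K_COLOR["fy"] * p_color[1] / p_color[2] + K_COLOR["cy"]
    return uc, vc

print(depth_joint_to_color_pixel(300.0, 200.0, 1.8))
```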


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3784 ◽  
Author(s):  
Jameel Malik ◽  
Ahmed Elhayek ◽  
Didier Stricker

Hand shape and pose recovery is essential for many computer vision applications such as animation of a personalized hand mesh in a virtual environment. Although there are many hand pose estimation methods, only a few deep-learning-based algorithms target 3D hand shape and pose from a single RGB or depth image. Jointly estimating hand shape and pose is very challenging because none of the existing real benchmarks provides ground-truth hand shape. For this reason, we propose a novel weakly-supervised approach for 3D hand shape and pose recovery (named WHSP-Net) from a single depth image by learning shapes from unlabeled real data and labeled synthetic data. To this end, we propose a novel framework which consists of three novel components. The first is a Convolutional Neural Network (CNN) based deep network which produces 3D joint positions from learned 3D bone vectors using a new layer. The second is a novel shape decoder that recovers a dense 3D hand mesh from sparse joints. The third is a novel depth synthesizer which reconstructs a 2D depth image from the 3D hand mesh. The whole pipeline is fine-tuned in an end-to-end manner. We demonstrate that our approach recovers reasonable hand shapes from real-world datasets as well as from a live depth-camera stream in real time. Our algorithm outperforms state-of-the-art methods that output more than the joint positions and shows competitive performance on the 3D pose estimation task.
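One component mentioned above, recovering 3D joint positions from learned 3D bone vectors, can be illustrated with a toy accumulation along a kinematic tree (the skeleton topology and values below are assumptions for illustration, not the WHSP-Net layer):

```python
# Accumulate per-bone 3D offsets along a kinematic tree to obtain joint positions.
import numpy as np

# parent[i] gives the parent joint of joint i (-1 marks the root, e.g. the wrist)
PARENTS = [-1, 0, 1, 2, 0, 4]   # wrist plus a 3-joint chain and a 2-joint chain

def joints_from_bones(root_pos, bone_vectors, parents=PARENTS):
    """bone_vectors[i] is the 3D offset from parents[i] to joint i (root entry unused)."""
    joints = np.zeros((len(parents), 3))
    joints[0] = root_pos
    for i, p in enumerate(parents):
        if p >= 0:
            joints[i] = joints[p] + bone_vectors[i]
    return joints

bones = np.zeros((6, 3))
bones[1:4] = [0.0, 0.03, 0.0]       # three 3 cm bones stacked along +y
bones[4:6] = [0.03, 0.0, 0.0]       # two 3 cm bones along +x
print(joints_from_bones(np.zeros(3), bones))
```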


2006 ◽  
Vol 37 (4) ◽  
pp. 348-354 ◽  
Author(s):  
R. Schaa ◽  
J.E. Reid ◽  
P.K. Fullagar
