Using 6 DOF vision-inertial tracking to evaluate and improve low cost depth sensor based SLAM

Author(s):  
Thomas Calloway ◽  
Dalila B. Megherbi
Keyword(s):  
Low Cost

2020 ◽  
Vol 6 (3) ◽  
pp. 11
Author(s):  
Naoyuki Awano

Depth sensors are important in several fields for recognizing real space. However, there are cases where most depth values in a depth image captured by a sensor are missing because the depths of distant objects are not always captured. This often occurs when a low-cost or structured-light depth sensor is used. It also occurs frequently in applications where depth sensors are used to replicate human vision, e.g., when the sensors are used in head-mounted displays (HMDs). An ideal inpainting (repair or restoration) approach for depth images with large missing areas, such as partial foreground depths, is to inpaint only the foreground; however, conventional inpainting studies have attempted to inpaint entire images. Thus, under the assumption of an HMD-mounted depth sensor, we propose a method to partially inpaint and reconstruct an RGB-D depth image so as to preserve foreground shapes. The proposed method comprises a smoothing process for noise reduction, filling of defects in the foreground area, and refinement of the filled depths. Experimental results demonstrate that the inpainted results produced by the proposed method preserve object shapes in the foreground area, and that the inpainted depths are accurate with respect to the real depth as measured by the peak signal-to-noise ratio (PSNR) metric.
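The abstract above evaluates inpainted depths against the real depth with the PSNR metric. A minimal sketch of that metric (function and parameter names here are hypothetical; the optional mask models the paper's restriction of the evaluation to the inpainted area):

```python
import numpy as np

def depth_psnr(inpainted, real, mask=None, peak=None):
    """PSNR (dB) of an inpainted depth image against the real depth.

    mask -- optional boolean array selecting only the inpainted
            (formerly missing) pixels, so intact regions do not
            inflate the score.
    peak -- peak depth value of the sensor's range; defaults to the
            maximum of the real depth image.
    """
    inpainted = np.asarray(inpainted, dtype=np.float64)
    real = np.asarray(real, dtype=np.float64)
    if mask is None:
        mask = np.ones(real.shape, dtype=bool)
    if peak is None:
        peak = real.max()
    mse = np.mean((inpainted[mask] - real[mask]) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

A perfect reconstruction yields infinite PSNR; a uniform error of 1% of the peak depth gives 40 dB.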


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2574 ◽  
Author(s):  
Jesus Monroy-Anieva ◽  
Cyril Rouviere ◽  
Eduardo Campos-Mercado ◽  
Tomas Salgado-Jimenez ◽  
Luis Garcia-Valdovinos

This work describes the modeling, control, and development of a low-cost Micro Autonomous Underwater Vehicle (μ-AUV), named AR2D2. The main objective of this work is to make the vehicle detect and follow an object of a defined color by means of the readings of a depth sensor and the information provided by an artificial vision system. A nonlinear PD (Proportional-Derivative) controller is implemented on the vehicle in order to stabilize the heave and surge movements. A formal stability proof of the closed-loop system using Lyapunov’s theory is given. Furthermore, the performance of the μ-AUV is validated through numerical simulations in MATLAB and real-time experiments.
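The paper's exact nonlinear PD law and Lyapunov proof are not reproduced here; the sketch below only illustrates the idea on a hypothetical 1-DOF heave model, using a tanh-saturated PD term (a common nonlinear PD form) with made-up mass, damping, and gain values:

```python
import numpy as np

def saturated_pd(e, e_dot, kp, kd):
    # The tanh bounds the thruster command, giving a nonlinear PD law.
    return -kp * np.tanh(e) - kd * np.tanh(e_dot)

# Hypothetical 1-DOF heave dynamics: m*z'' = tau - d*z'
# (buoyancy/restoring terms omitted for brevity).
m, d = 5.0, 2.0        # mass (kg) and linear damping (illustrative)
kp, kd = 8.0, 6.0      # controller gains (illustrative)
z, z_dot = 0.0, 0.0    # start at the surface, at rest
z_ref = 1.0            # depth setpoint from the depth sensor (m)
dt = 0.01              # integration step (s)
for _ in range(3000):  # simulate 30 s with explicit Euler
    tau = saturated_pd(z - z_ref, z_dot, kp, kd)
    z_dot += (tau - d * z_dot) / m * dt
    z += z_dot * dt
```

After the transient, the simulated vehicle settles at the 1 m setpoint, mirroring the heave stabilization the controller is designed to provide.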


2015 ◽  
Vol 100 ◽  
pp. 55-62 ◽  
Author(s):  
Akihiro Nakamura ◽  
Hiroyuki Funaya ◽  
Naohiro Uezono ◽  
Kinichi Nakashima ◽  
Yasumasa Ishida ◽  
...  

2014 ◽  
Author(s):  
Timo Breuer ◽  
Christoph Bodensteiner ◽  
Michael Arens

2017 ◽  
Vol 17 (2) ◽  
pp. 40-45
Author(s):  
Ludovic David ◽  
Guillaume Bouyer ◽  
Samir Otmane

Virtual reality technologies have been explored for several years for post-stroke motor rehabilitation, but these systems have seen too little diffusion among medical facilities and none among patients. Our objective is the development of an interactive system to assist motor rehabilitation of the upper limb after a stroke, one that retains the medical benefits of traditional post-stroke methods while reducing human costs (usable with minimal supervision) and material costs (general-public hardware), and facilitating active patient participation. The system architecture, 3D interactions, and virtual content are based on an iterative, user-centered design methodology involving patients and therapists. The system allows users to perform repetitive and intensive tasks with the upper limb. The paretic hand is tracked with a low-cost depth sensor. Kinematic performance is monitored and visual feedback is provided. Preliminary tests were conducted on a non-immersive prototype with eight patients and a target-pointing task. The results showed good usability and high acceptance from the users.
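The system above monitors kinematic performance of the tracked hand during a target-pointing task. One common kinematic index for such tasks is the hand-path straightness ratio; this is a generic sketch, not necessarily the metric used in the paper:

```python
import numpy as np

def path_straightness(hand_positions):
    """Traveled path length divided by straight-line distance.

    hand_positions: (N, 3) array of tracked hand positions from the
    depth sensor. Returns a value >= 1; 1.0 is a perfectly straight
    reach, larger values indicate a more curved, less efficient path.
    """
    p = np.asarray(hand_positions, dtype=np.float64)
    segments = np.diff(p, axis=0)
    path_len = np.linalg.norm(segments, axis=1).sum()
    direct = np.linalg.norm(p[-1] - p[0])
    return path_len / direct
```

For example, a reach passing through a 1-unit detour midway between two targets 2 units apart gives a ratio of √2 ≈ 1.41.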


Author(s):  
Quentin Kevin Gautier ◽  
Thomas G. Garrison ◽  
Ferrill Rushton ◽  
Nicholas Bouck ◽  
Eric Lo ◽  
...  

Purpose
Digital documentation techniques for tunneling excavations at archaeological sites are becoming more common. These methods, such as photogrammetry and LiDAR (Light Detection and Ranging), can create precise three-dimensional models of excavations, with millimeter-to-centimeter accuracy, to complement traditional forms of documentation. However, these techniques require either expensive equipment or long processing times that can be prohibitive during short field seasons in remote areas. This article aims to determine the effectiveness of various low-cost sensors and real-time algorithms for creating digital scans of archaeological excavations.

Design/methodology/approach
The authors used a class of algorithms called SLAM (Simultaneous Localization and Mapping) along with depth-sensing cameras. While these algorithms have improved greatly in recent years, the accuracy of the results still depends on the scanning conditions. The authors developed a prototype scanning device, collected 3D data at a Maya archaeological site, and refined the instrument in a system of natural caves. This article presents an analysis of the resulting 3D models to determine the effectiveness of the various sensors and algorithms employed.

Findings
While not as accurate as commercial LiDAR systems, the prototype presented, which employs a time-of-flight depth sensor and a feature-based SLAM algorithm, is a rapid and effective way to document archaeological contexts at a fraction of the cost.

Practical implications
The proposed system is easy to deploy, provides real-time results, and would be particularly useful in salvage operations as well as in high-risk areas where cultural heritage is threatened.

Originality/value
This article compares many different low-cost scanning solutions for underground excavations and presents a prototype that can be easily replicated for documentation purposes.
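SLAM with depth-sensing cameras rests on back-projecting each depth pixel into a 3D point and placing it in the world with the estimated camera pose. A minimal pinhole-model sketch of these two steps (the prototype's actual pipeline is more involved, and the intrinsics below are hypothetical):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-frame 3D points."""
    h, w = depth.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (h, w, 3)

def to_world(points, T):
    """Apply a 4x4 camera-to-world pose, as estimated by SLAM."""
    R, t = T[:3, :3], T[:3, 3]
    return points @ R.T + t
```

Accumulating `to_world(depth_to_points(...), T_k)` over all frames k yields the kind of 3D model of the excavation analyzed in the article.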


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2711 ◽  
Author(s):  
Peikui Huang ◽  
Xiwen Luo ◽  
Jian Jin ◽  
Liangju Wang ◽  
Libo Zhang ◽  
...  

Hyperspectral sensors, especially close-range hyperspectral cameras, have been widely introduced to detect the biological processes of plants in high-throughput phenotyping platforms, supporting the identification of biotic and abiotic stress reactions at an early stage. However, the complex geometry of plants and their interaction with the illumination severely affect the spectral information obtained. Furthermore, plant structure, leaf area, and leaf inclination distribution are critical indexes that have been widely used in multiple plant models. Therefore, combining hyperspectral images with 3D point clouds is a promising approach to solving these problems and improving the high-throughput phenotyping technique. We propose a novel approach fusing a low-cost depth sensor with a close-range hyperspectral camera, which extends the hyperspectral camera's ability with 3D information as a potential tool for high-throughput phenotyping. An exemplary new calibration and analysis method is demonstrated in soybean leaf experiments. The results showed that a 0.99 pixel resolution for the hyperspectral camera and a 3.3 millimeter accuracy for the depth sensor could be achieved in a controlled environment using the method proposed in this paper. We also discuss the new capabilities gained using this method to quantify and model the effects of plant geometry and sensor configuration. The resulting 3D reflectance models can be used to minimize geometry-related effects in hyperspectral images and to significantly improve high-throughput phenotyping. Overall, the results of this research indicate that the proposed method provides more accurate spatial and spectral plant information, which helps to enhance the precision of biological processes in high-throughput phenotyping.
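Fusing a depth sensor with a hyperspectral camera requires mapping each 3D point measured by the depth sensor into the hyperspectral image plane through the calibrated extrinsics and intrinsics. A pinhole-projection sketch with hypothetical parameter values (the paper's own calibration procedure is not reproduced here):

```python
import numpy as np

def project_to_hyperspectral(points, T_hs_from_depth, fx, fy, cx, cy):
    """Map depth-camera 3D points (N, 3) to hyperspectral pixels (N, 2).

    T_hs_from_depth -- 4x4 extrinsic transform from the calibration.
    fx, fy, cx, cy  -- hyperspectral camera intrinsics.
    """
    R, t = T_hs_from_depth[:3, :3], T_hs_from_depth[:3, 3]
    p = points @ R.T + t             # points in the hyperspectral frame
    u = fx * p[:, 0] / p[:, 2] + cx  # perspective division
    v = fy * p[:, 1] / p[:, 2] + cy
    return np.stack([u, v], axis=-1)
```

Each projected (u, v) indexes the spectrum to attach to that 3D point, producing the combined spatial-spectral representation described above.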

