SGC-VSLAM: A Semantic and Geometric Constraints VSLAM for Dynamic Indoor Environments

Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2432
Author(s):  
Shiqiang Yang ◽  
Guohao Fan ◽  
Lele Bai ◽  
Cheng Zhao ◽  
Dexin Li

As one of the core technologies for autonomous mobile robots, Visual Simultaneous Localization and Mapping (VSLAM) has been widely researched in recent years. However, most state-of-the-art VSLAM systems adopt a strong scene-rigidity assumption for analytical convenience, which limits their utility in real-world environments containing independently moving objects. Hence, this paper presents a semantic and geometric constraints VSLAM (SGC-VSLAM), built on the RGB-D mode of ORB-SLAM2 with the addition of dynamic-detection and static point cloud map construction modules. In detail, a novel improved quadtree-based method was adopted in SGC-VSLAM to enhance the performance of the feature extractor in ORB-SLAM (Oriented FAST and Rotated BRIEF SLAM). Moreover, a new dynamic feature detection method, termed semantic and geometric constraints, was proposed, which provides a robust and fast way to filter dynamic features. The semantic bounding boxes generated by YOLO v3 (You Only Look Once, v3) were used to calculate a more accurate fundamental matrix between adjacent frames, which was then used to filter out all of the truly dynamic features. Finally, a static point cloud map was built using a new drawing-key-frame selection strategy. Experiments on the public TUM RGB-D (Red-Green-Blue Depth) dataset were conducted to evaluate the proposed approach. The evaluation revealed that SGC-VSLAM effectively improves the positioning accuracy of the ORB-SLAM2 system in highly dynamic scenarios and can also build a map of the static parts of the real environment, which has long-term application value for autonomous mobile robots.
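The geometric half of the constraint described above, checking matched features against the epipolar geometry induced by the fundamental matrix, can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation; the function names and the 1-pixel threshold are assumptions for the sketch.

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    """Distance (in pixels) from point p2 in frame 2 to the epipolar
    line F @ p1 induced by its match p1 in frame 1.

    F      : 3x3 fundamental matrix between adjacent frames
    p1, p2 : homogeneous pixel coordinates (x, y, 1)
    """
    a, b, c = F @ p1                      # epipolar line l' = F p1 in frame 2
    return abs(a * p2[0] + b * p2[1] + c) / np.hypot(a, b)

def filter_dynamic(F, pts1, pts2, thresh=1.0):
    """Flag matches whose epipolar error exceeds thresh as dynamic.

    A static point should lie (near) its epipolar line; a point on an
    independently moving object generally will not.
    """
    return [epipolar_distance(F, p1, p2) > thresh
            for p1, p2 in zip(pts1, pts2)]
```

In the paper's pipeline the fundamental matrix itself is estimated only from features outside the YOLO v3 bounding boxes, so that potentially dynamic points do not corrupt the epipolar geometry used to test them.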

2021 ◽  
Vol 11 (4) ◽  
pp. 1953
Author(s):  
Francisco Martín ◽  
Fernando González ◽  
José Miguel Guerrero ◽  
Manuel Fernández ◽  
Jonatan Ginés

The perception and identification of visual stimuli from the environment is a fundamental capacity of autonomous mobile robots. Current deep learning techniques make it possible to identify and segment objects of interest in an image. This paper presents a novel algorithm to segment an object's space from a deep segmentation of an image taken by a 3D camera. The proposed approach solves the boundary-pixel problem that appears when segmented pixels are mapped directly to their correspondences in the point cloud. We validate our approach against baseline approaches on real images taken by a 3D camera, showing that our method outperforms them in terms of accuracy and reliability. As an application of the proposed algorithm, we present a semantic mapping approach for a mobile robot operating in indoor environments.
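The direct pixel-to-point mapping that this paper improves upon can be sketched as a plain pinhole back-projection of the segmented pixels: each masked pixel with a valid depth reading is lifted into a 3D camera-frame point. The function name and intrinsics are illustrative, not from the paper; the boundary-pixel problem arises exactly because this naive mapping lifts mask-edge pixels whose depth belongs to the background.

```python
import numpy as np

def backproject_mask(depth, mask, fx, fy, cx, cy):
    """Back-project pixels selected by a segmentation mask into 3D.

    depth : HxW depth image in metres
    mask  : HxW boolean segmentation mask from the deep segmenter
    fx, fy, cx, cy : pinhole camera intrinsics
    Returns an Nx3 array of camera-frame points (x, y, z).
    """
    v, u = np.nonzero(mask)            # pixel rows (v) and columns (u)
    z = depth[v, u]
    valid = z > 0                      # drop pixels with no depth reading
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack((x, y, z))
```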


IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 31665-31676 ◽  
Author(s):  
Francisco A. X. Da Mota ◽  
Matheus Xavier Rocha ◽  
Joel J. P. C. Rodrigues ◽  
Victor Hugo C. De Albuquerque ◽  
Auzuir Ripardo De Alexandria

2017 ◽  
Vol 29 (5) ◽  
pp. 928-934
Author(s):  
Kiyoaki Takahashi ◽  
Takafumi Ono ◽  
Tomokazu Takahashi ◽  
Masato Suzuki ◽  
...  

Autonomous mobile robots need to acquire information about their surrounding environment, on the basis of which they perform self-localization. Current autonomous mobile robots often use point cloud data acquired by laser range finders (LRFs) instead of image data. In the virtual autonomous traveling tests conducted in this study, we evaluated the robot's self-localization performance with Normal Distributions Transform (NDT) scan matching, using both 2D and 3D point cloud data to assess which of the two yields better self-localization.
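As a rough illustration of the NDT representation behind the scan matching evaluated above, the following sketch fits one Gaussian per 2D grid cell of a reference point cloud and scores a query point under its cell's distribution. The cell size, regularization term, and minimum point count are assumptions for the sketch; a full NDT matcher would additionally optimize a scan pose to maximize the summed scores.

```python
import numpy as np

def build_ndt_cells(points, cell_size=1.0):
    """Group 2D points into grid cells and fit a Gaussian to each cell.

    points : Nx2 array of reference-map points
    Returns {cell_key: (mean, covariance)} for sufficiently full cells.
    """
    cells = {}
    keys = np.floor(points / cell_size).astype(int)
    for key in set(map(tuple, keys)):
        pts = points[np.all(keys == key, axis=1)]
        if len(pts) >= 3:              # need enough points for a covariance
            # small diagonal term keeps the covariance invertible
            cells[key] = (pts.mean(axis=0), np.cov(pts.T) + 1e-6 * np.eye(2))
    return cells

def ndt_score(cells, point, cell_size=1.0):
    """Gaussian score of one query point under its cell (0 if empty cell)."""
    key = tuple(np.floor(point / cell_size).astype(int))
    if key not in cells:
        return 0.0
    mean, cov = cells[key]
    d = point - mean
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))
```

The 2D-versus-3D comparison in the study amounts to building these distributions from planar LRF scans versus full 3D point clouds and measuring which representation localizes the robot more reliably.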

