Distributed Monocular SLAM for Indoor Map Building

2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Ruwan Egodagamage ◽  
Mihran Tuceryan

Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM, an agent generates a map of an unknown environment while estimating its own location within it. The ubiquity of cameras has led to monocular visual SLAM, where a camera is the only sensing device in the SLAM process. In modern applications, multiple mobile agents may be involved in generating such maps, which requires a distributed computational framework. Each agent can generate its own local map, and the local maps can then be combined into a map covering a larger area; multiple agents can thus cover a given environment faster than a single agent. Furthermore, the agents can interact with each other in the same environment, making this framework especially practical for collaborative applications such as augmented reality. One of the main challenges of distributed SLAM is identifying overlapping maps, especially when the relative starting positions of the agents are unknown. In this paper, we propose a system of multiple monocular agents, with unknown relative starting positions, that generates a semidense global map of the environment.

2016 ◽  
Vol 04 (02) ◽  
pp. 155-165 ◽  
Author(s):  
A. Torres-González ◽  
J. R. Martinez-de Dios ◽  
A. Jiménez-Cano ◽  
A. Ollero

This paper deals with 3D Simultaneous Localization and Mapping (SLAM), in which a UAS uses only range measurements to build a local map of an unknown environment and to self-localize in that map. In recent years, Range-Only (RO) SLAM has attracted significant interest: it is suitable for non-line-of-sight conditions and poor lighting, making it superior to visual SLAM in some problems. However, several issues constrain its applicability in practice, such as delays in map building and low map and UAS estimation accuracy. This paper proposes a 3D RO-SLAM scheme for UAS that specifically focuses on reducing map building delays and improving accuracy without compromising efficiency in the consumption of resources. The scheme integrates sonar measurements together with range measurements between the robot and beacons deployed in the scenario. The proposed scheme presents two main advantages: (1) it integrates not only direct range measurements between the robot and the beacons but also range measurements between beacons (called inter-beacon measurements), which significantly reduce map building times and improve map and UAS localization accuracy; and (2) the SLAM scheme is endowed with a supervisory module that self-adapts the set of measurements integrated in SLAM, reducing computational, bandwidth, and energy consumption. Experimental validation in field experiments with an octorotor UAS showed that the proposed scheme improved map building times by 72%, map accuracy by 40%, and UAS localization accuracy by 12%.
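The benefit of inter-beacon measurements is that ranges between already-localized beacons let a new beacon be fixed in the map without waiting for the UAS to fly near it. As a minimal illustration (not the paper's SLAM scheme), the sketch below estimates a beacon's position from range measurements to known anchor beacons by linearized least squares; the function name and interface are assumptions for illustration.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position estimate from ranges to known anchors.

    Linearizes |x - p_i|^2 = r_i^2 by subtracting the first equation
    from the others, then solves the resulting linear system."""
    p = np.asarray(anchors, dtype=float)
    r = np.asarray(ranges, dtype=float)
    # Rows: 2 (p_i - p_0) . x = r_0^2 - r_i^2 + |p_i|^2 - |p_0|^2
    A = 2.0 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2
         + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With four or more non-coplanar anchors the 3D position is determined uniquely; with noisy ranges the least-squares solution averages the inconsistency.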


Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2230 ◽  
Author(s):  
Su Wang ◽  
Yukinori Kobayashi ◽  
Ankit A. Ravankar ◽  
Abhijeet Ravankar ◽  
Takanori Emaru

Scale ambiguity and drift are inherent drawbacks of a purely visual monocular simultaneous localization and mapping (SLAM) system. This is a crucial challenge when another robot equipped with a range sensor must localize in a map previously built by a monocular camera. In this paper, a metrically inconsistent a priori map built by monocular SLAM is subsequently used for localization on another robot using only a laser range finder (LRF). To tackle the metric inconsistency, this paper proposes a 2D-LRF-based localization algorithm that allows the robot to locate itself and resolve the scale of the local map simultaneously. To align the 2D LRF data with the map, 2D structures are extracted from the 3D point cloud map obtained by the visual SLAM process. Next, a modified Monte Carlo localization (MCL) approach is proposed to estimate the robot's state, which comprises both the robot's pose and the map's relative scale. Finally, the effectiveness of the proposed system is demonstrated in experiments on a public benchmark dataset as well as in a real-world scenario. The experimental results indicate that the proposed method is able to globally localize the robot in real time, and that successful localization can be achieved even in a badly drifted map.
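The key idea of augmenting the MCL state with the map's relative scale can be sketched in a few lines. The toy filter below (an illustration, not the authors' implementation) keeps one particle per candidate scale, omits the pose part of the state, weights each particle by how well a range read from the drifted map, multiplied by the particle's scale, explains the measured range, and resamples; after a few measurements the particle cloud concentrates near the true scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_scale_step(particles, weights, map_range, measured_range, sigma=0.05):
    """One weight-and-resample step of an MCL variant whose state
    includes the unknown map scale s: each particle predicts the
    measurement as s * map_range (the range read off the
    metrically inconsistent map)."""
    predicted = particles * map_range
    w = weights * np.exp(-0.5 * ((predicted - measured_range) / sigma) ** 2)
    w /= w.sum()
    # Systematic resampling.
    positions = (np.arange(len(w)) + rng.random()) / len(w)
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), len(w) - 1)
    return particles[idx], np.full(len(w), 1.0 / len(w))

# Particles over candidate scales of the drifted map.
particles = rng.uniform(0.2, 3.0, 500)
weights = np.full(500, 1.0 / 500)
true_scale = 1.7
for map_range in (1.0, 2.5, 4.0):
    measured = true_scale * map_range + rng.normal(0.0, 0.02)
    particles, weights = mcl_scale_step(particles, weights, map_range, measured)
scale_est = particles.mean()
```

A full implementation would jointly sample pose and scale and evaluate the likelihood against the extracted 2D structures rather than a single range.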


2011 ◽  
Vol 2011 ◽  
pp. 1-10 ◽  
Author(s):  
Michal Jama ◽  
Dale Schinstock

This work presents a vision-based system for navigation on a vertical takeoff and landing unmanned aerial vehicle (UAV). It is a monocular, vision-based, simultaneous localization and mapping (SLAM) system that measures the position and orientation of the camera and builds a map of the environment from the video stream of a single camera. This differs from past SLAM solutions on UAVs, which use sensors that measure depth, such as LIDAR, stereoscopic cameras, or depth cameras. The solution presented in this paper extends and significantly modifies a recent open-source algorithm that solves the SLAM problem using an approach fundamentally different from the traditional one. The proposed modifications provide the position measurements necessary for the navigation solution on a UAV. The main contributions of this work are: (1) an extension of the map building algorithm that enables it to be used realistically while controlling a UAV and simultaneously building the map; (2) improved performance of the SLAM algorithm at lower camera frame rates; and (3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV using monocular vision for navigation is feasible.


2017 ◽  
Vol 13 (1) ◽  
pp. 155014771668484 ◽  
Author(s):  
Huthiafa Q Qadori ◽  
Zuriati A Zulkarnain ◽  
Zurina Mohd Hanapi ◽  
Shamala Subramaniam

Recently, wireless sensor networks have employed the concept of the mobile agent to reduce energy consumption and achieve effective data gathering. In mobile-agent-based data gathering, finding the optimal itinerary for the mobile agent is an important and essential step. However, single-agent itinerary planning suffers from two primary disadvantages: task delay and the growing size of the mobile agent as the scale of the network expands. Multi-agent itinerary planning overcomes these drawbacks. Despite its advantages, finding the optimal number of distributed mobile agents, the grouping of source nodes, and the optimal itinerary of each mobile agent for simultaneous data gathering are still regarded as critical issues in wireless sensor networks. Therefore, this article reviews the existing algorithms identified in the literature to address these issues. The review shows that most of the algorithms used a single parameter to find the optimal number of mobile agents in multi-agent itinerary planning, without utilizing other parameters. More importantly, the review shows that these algorithms did not take into account the security of the data gathered by the mobile agent. Accordingly, we indicate the limitations of each proposed algorithm and provide new directions for future research.


2015 ◽  
Vol 2 (2) ◽  
pp. 22-28 ◽  
Author(s):  
Ales Jelinek

The aim of this paper is to provide a brief overview of vector map techniques used in mobile robotics and to present the current state of research in this field at the Brno University of Technology. Vector maps are described as part of the simultaneous localization and mapping (SLAM) problem in environments without artificial landmarks or a global navigation system. The paper describes algorithms from data acquisition to map building, with particular emphasis on segmentation, line extraction, and scan matching. All significant algorithms are illustrated with experimental results.


Electronics ◽  
2018 ◽  
Vol 7 (11) ◽  
pp. 305 ◽  
Author(s):  
Seyyed Hoseini ◽  
Peyman Kabiri

Camera tracking and the construction of a robust and accurate map in unknown environments are still challenging tasks in computer vision and robotics. Visual Simultaneous Localization and Mapping (SLAM) and Augmented Reality (AR) are two important applications whose performance depends entirely on the accuracy of the camera tracking routine. This paper presents a novel feature-based approach to the monocular SLAM problem using a hand-held camera in room-sized workspaces with a maximum scene depth of 4–5 m. At the core of the proposed method is a Particle Filter (PF) responsible for estimating the extrinsic parameters of the camera. In addition, contrary to key-frame-based methods, the proposed system tracks the camera frame by frame and constructs a robust and accurate map incrementally. The algorithm initially constructs a metric sparse map: a chessboard pattern with a known cell size is placed in front of the camera for a few frames, which enables the algorithm to accurately compute the pose of the camera, so the depths of the initially detected natural feature points are easily calculated. Afterwards, camera pose estimation for each new incoming frame is carried out in a framework that works only with a set of visible natural landmarks. To recover the depth of newly detected landmarks, a delayed approach based on linear triangulation is used. The proposed method was applied to real-world VGA-quality video (640 × 480 pixels), where the translation error of the camera pose is less than 2 cm on average and the orientation error is less than 3 degrees, which indicates the effectiveness and accuracy of the developed algorithm.
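The delayed depth recovery step relies on standard linear triangulation: once a landmark has been observed from two sufficiently separated camera poses, its 3D position is the approximate intersection of the two viewing rays. A minimal DLT-style sketch, assuming 3x4 projection matrices (NumPy arrays) are already available; the function name and interface are illustrative:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a single 3D point from two views.

    P1, P2 are 3x4 camera projection matrices; x1, x2 are the (u, v)
    image observations. Builds the homogeneous system A X = 0 from
    x ~ P X and takes the null vector via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

Delaying the triangulation until the baseline between the two poses is large enough keeps the intersection well conditioned, which is the point of the delayed approach mentioned above.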


2017 ◽  
Vol 9 (4) ◽  
pp. 283-296 ◽  
Author(s):  
Sarquis Urzua ◽  
Rodrigo Munguía ◽  
Antoni Grau

Using a camera, a micro aerial vehicle (MAV) can perform visual navigation during periods or circumstances when GPS is unavailable or only partially available. In this context, monocular simultaneous localization and mapping (SLAM) methods represent an excellent alternative, because limitations on platform design, mobility, and payload capacity impose considerable restrictions on the computational and sensing resources available to the MAV. However, the use of monocular vision introduces technical difficulties, such as the impossibility of directly recovering the metric scale of the world. In this work, a novel monocular SLAM system with application to MAVs is proposed. The sensory input is taken from a monocular downward-facing camera, an ultrasonic range finder, and a barometer. The proposed method is based on theoretical findings obtained from an observability analysis. Experimental results with real data confirm those findings and show that the proposed method is capable of providing good results with low-cost hardware.
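One way metric scale can be recovered from the extra sensors is to compare the metric altitude reported by the ultrasonic range finder with the corresponding up-to-scale altitude estimated by monocular SLAM. The least-squares sketch below illustrates that idea only; it is not the paper's observability-based estimator, and the names are assumptions:

```python
import numpy as np

def estimate_metric_scale(h_ultrasonic, h_slam):
    """Least-squares scale factor lambda minimizing
    sum_i (h_ultrasonic_i - lambda * h_slam_i)^2, pairing metric
    altitudes from the range finder with up-to-scale SLAM altitudes."""
    h_u = np.asarray(h_ultrasonic, dtype=float)
    h_s = np.asarray(h_slam, dtype=float)
    return float(h_u @ h_s / (h_s @ h_s))
```

Multiplying the SLAM map and trajectory by the estimated factor expresses both in metres.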


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2084 ◽
Author(s):  
Junwon Lee ◽  
Kieun Lee ◽  
Aelee Yoo ◽  
Changjoo Moon

Self-driving cars, autonomous vehicles (AVs), and connected cars combine Internet of Things (IoT) and automobile technologies, thus contributing to the development of society. However, processing the big data generated by AVs is a challenge due to overloading issues. Additionally, near-real-time and real-time IoT services play a significant role in vehicle safety. Therefore, the architecture of an IoT system that collects and processes data and provides services for vehicle driving is an important consideration. In this study, we propose a fog computing server model that generates a high-definition (HD) map using light detection and ranging (LiDAR) data generated by an AV. The driving vehicle edge node transmits the LiDAR point cloud information to the fog server through a wireless network. The fog server generates an HD map by applying the Normal Distribution Transform Simultaneous Localization and Mapping (NDT-SLAM) algorithm to the point clouds transmitted from the multiple edge nodes. Subsequently, the coordinate information of the HD map generated in the sensor frame is converted to coordinate information in the global frame and transmitted to the cloud server. Then, the cloud server creates an HD map by integrating the collected point clouds using the coordinate information.
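The sensor-frame-to-global-frame conversion described above is a composition of rigid-body transforms. A minimal sketch with 4x4 homogeneous matrices; the frame names and the two-stage composition are assumptions for illustration:

```python
import numpy as np

def to_global(points_sensor, T_map_sensor, T_global_map):
    """Express an Nx3 sensor-frame point cloud in the global frame by
    composing two rigid transforms given as 4x4 homogeneous matrices:
    sensor -> local map (from SLAM), then local map -> global."""
    n = len(points_sensor)
    homog = np.hstack([np.asarray(points_sensor, float), np.ones((n, 1))])
    T = T_global_map @ T_map_sensor  # composed sensor -> global transform
    return (homog @ T.T)[:, :3]
```

Once every fog server reports its points in the shared global frame, the cloud server can merge them by simple concatenation followed by map refinement.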


Sensors ◽  
2019 ◽  
Vol 19 (7) ◽  
pp. 1742 ◽  
Author(s):  
Chuang Qian ◽  
Hongjuan Zhang ◽  
Jian Tang ◽  
Bijun Li ◽  
Hui Liu

An indoor map is a piece of infrastructure associated with location-based services, and Simultaneous Localization and Mapping (SLAM)-based mobile mapping is an efficient method to construct one. This paper proposes a SLAM algorithm based on a laser scanner and an Inertial Measurement Unit (IMU) for 2D indoor mapping. A grid-based occupancy likelihood map is chosen as the map representation and is built from all previous scans. Scan-to-map matching is used to find the optimal rigid-body transformation, avoiding the accumulation of matching errors. Map generation and updating are probabilistically motivated. Based on the assumption that orthogonality is the main feature of indoor environments, we propose a lightweight segment extraction method based on the orthogonal blurred segments (OBS) method. Instead of calculating the parameters of segments, we give the scan points contained in blurred segments a greater weight during construction of the grid-based occupancy likelihood map, which we call the orthogonal feature weighted occupancy likelihood map (OWOLM). The OWOLM enhances the occupancy likelihood map by fusing in the orthogonal features, and it can filter out noisy scan points produced by objects such as glass cabinets and bookcases. Experiments were carried out in a library, a representative indoor environment consisting largely of orthogonal features. The experimental results show that, compared with the general occupancy likelihood map, the OWOLM effectively reduces accumulated errors and constructs a clearer indoor map.
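The intuition behind the orthogonal weighting can be sketched with a simple score: scan points whose supporting blurred segment is parallel or perpendicular to the environment's dominant direction are trusted more during the occupancy update, while oblique returns (typical of glass and clutter) are down-weighted. The weighting function and update below are illustrative, not the OWOLM formulas from the paper:

```python
import numpy as np

def orthogonality_weight(segment_angle, dominant_angle, sharpness=8.0):
    """Score in (0, 1] for a scan point's supporting blurred segment:
    close to 1 when the segment is parallel OR perpendicular to the
    dominant direction (cos(4d) has a 90-degree period), decaying
    quickly for oblique segments such as spurious glass reflections."""
    d = segment_angle - dominant_angle
    return np.exp(sharpness * (np.cos(4.0 * d) - 1.0) / 2.0)

def update_cell(log_odds, hit_weight, l_occ=0.85):
    """Weighted log-odds occupancy update for a single hit cell: a
    down-weighted hit nudges the cell toward occupied less strongly."""
    return log_odds + hit_weight * l_occ
```

Under this scheme a wall return at 0 or 90 degrees contributes nearly the full occupancy increment, while a 45-degree spurious return contributes almost nothing, which mirrors the filtering effect the abstract attributes to the OWOLM.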

