Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface

2014 ◽  
Vol 2014 ◽  
pp. 1-12
Author(s):  
Wei Song ◽  
Seoungjae Cho ◽  
Yulong Xi ◽  
Kyungeun Cho ◽  
Kyhyun Um

A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high-resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
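The flag-map idea can be sketched as a simple voxel hash: each incoming point is quantized to a grid cell, and the point is stored only if its cell has not been flagged yet. The sketch below is an illustrative NumPy version of the redundant-point removal only, not the paper's CPU/GPU implementation; the voxel size is an assumed parameter.

```python
import numpy as np

def voxel_flag_filter(points, voxel_size=0.1):
    """Keep only the first point seen in each voxel cell.

    A sparse set stands in for the 3D flag map: a point is registered
    only if its quantized grid cell has not been flagged yet.
    """
    flags = set()
    kept = []
    for p in points:
        key = tuple((p // voxel_size).astype(int))   # quantize to a grid cell
        if key not in flags:
            flags.add(key)                           # flag the cell as occupied
            kept.append(p)
    return np.array(kept)

# Two nearby returns fall into one voxel; the third opens a new one.
scan = np.array([[0.01, 0.02, 0.00],
                 [0.03, 0.01, 0.02],
                 [1.00, 0.00, 0.00]])
print(len(voxel_flag_filter(scan)))  # 2
```

Because lookups touch only occupied cells, the flag map stays cheap even for large outdoor scans, which is what makes incremental real-time registration feasible.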

Universe ◽  
2020 ◽  
Vol 6 (10) ◽  
pp. 168
Author(s):  
Christopher Marsden ◽  
Francesco Shankar

In this work we present “Astera”, a cosmological visualization tool that renders a mock universe in real time using Unreal Engine 4. The large scale structure of the cosmic web is hard to visualize in two dimensions, and a 3D real time projection of this distribution allows for an unprecedented view of the large scale universe, with visually accurate galaxies placed in a dynamic 3D world. The underlying data are based on empirical relations assigned using results from N-Body dark matter simulations, and are matched to galaxies with similar morphologies and sizes, images of which are extracted from the Sloan Digital Sky Survey. Within Unreal Engine 4, galaxy images are transformed into textures and dynamic materials (with appropriate transparency) that are applied to static mesh objects with appropriate sizes and locations. To ensure excellent performance, these static meshes are “instanced” to utilize the full capabilities of a graphics processing unit. Additional components include a dynamic system for representing accelerated-time active galactic nuclei. The end result is a visually realistic large scale universe that can be explored by a user in real time, with accurate large scale structure. Astera is not yet ready for public release, but we are exploring options to make different versions of the code available for both research and outreach applications.


2013 ◽  
Vol 3 (1-2) ◽  
Author(s):  
Thuong Le-Tien ◽  
Marie Luong ◽  
Thai Phu Ho ◽  
Viet Dai Tran

Depth cameras such as the Microsoft Kinect are much cheaper than conventional 3D scanning devices and are thus easily affordable for everyday users. However, the depth data captured by the Kinect beyond a certain distance is of low quality. In this work, we implement a set of algorithms allowing users to capture 3D surfaces using a handheld Kinect. Because a classic alignment algorithm such as Iterative Closest Point (ICP) is ineffective at aligning point clouds with limited overlapping regions, a coarse alignment step using Sample Consensus Initial Alignment (SAC-IA) is incorporated into the registration process to improve the fitness of the 3D point clouds. Two robust reconstruction methods, namely Alpha Shapes and Grid Projection, are also implemented to reconstruct 3D surfaces from the registered point clouds. The experimental results have shown the efficiency and applicability of our blueprint. The system obtains acceptable results in a few minutes with a low-priced device, and may thus be a practical approach for avatar generation or online shopping.
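For illustration, here is a minimal point-to-point ICP refinement in NumPy, using brute-force nearest-neighbour matching and the Kabsch (SVD) solution for the rigid transform. In the pipeline described above, SAC-IA would supply the coarse initial alignment first; this sketch assumes fully overlapping, noise-free clouds and is not the authors' implementation.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t with R @ src_i + t ≈ dst_i (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=10):
    """Point-to-point ICP with brute-force nearest-neighbour matching."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[d.argmin(axis=1)]          # closest dst point per src point
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur

# Demo: recover a small rotation + translation of a noise-free grid cloud.
grid = np.array([[i, j, k] for i in range(3)
                           for j in range(3)
                           for k in range(3)], dtype=float)
theta = 0.05
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
src = grid @ R.T + np.array([0.01, 0.02, 0.03])
print(np.abs(icp(src, grid) - grid).max() < 1e-8)  # True
```

ICP only converges when the initial pose already puts true correspondences closest to each other, which is exactly why a coarse SAC-IA step matters for clouds with limited overlap.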


2021 ◽  
Vol 20 (3) ◽  
pp. 1-22
Author(s):  
David Langerman ◽  
Alan George

High-resolution, low-latency apps in computer vision are ubiquitous in today’s world of mixed-reality devices. These innovations provide a platform that can leverage the improving technology of depth sensors and embedded accelerators to enable higher-resolution, lower-latency processing for 3D scenes using depth-upsampling algorithms. This research demonstrates that filter-based upsampling algorithms are feasible for mixed-reality apps using low-power hardware accelerators. We parallelized and evaluated a depth-upsampling algorithm on two different devices: a reconfigurable-logic FPGA embedded within a low-power SoC, and a fixed-logic embedded graphics processing unit (GPU). We demonstrate that both accelerators can meet the real-time requirement of 11 ms latency for mixed-reality apps.
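As a sketch of the filter-based upsampling family evaluated here, the following joint-bilateral-style upsampler weights nearby low-resolution depth samples by spatial distance and by similarity in a high-resolution guide image. The window size, kernels, and parameters are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-res depth map guided by a high-res intensity image.

    Each output pixel averages nearby low-res depth samples, weighted by
    spatial distance (sigma_s, in low-res pixels) and by how closely the
    guide intensity at the sample matches the guide at the output pixel.
    """
    H, W = guide_hr.shape
    h, w = depth_lr.shape
    out = np.zeros((H, W))
    r = 2                                   # half-width of the support window
    for y in range(H):
        for x in range(W):
            ly, lx = y / scale, x / scale   # output pixel in low-res coords
            g = guide_hr[y, x]
            wsum = dsum = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    sy, sx = round(ly) + dy, round(lx) + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        gy = min(sy * scale, H - 1)   # guide pixel of the sample
                        gx = min(sx * scale, W - 1)
                        ws = np.exp(-((sy - ly) ** 2 + (sx - lx) ** 2)
                                    / (2 * sigma_s ** 2))
                        wr = np.exp(-((guide_hr[gy, gx] - g) ** 2)
                                    / (2 * sigma_r ** 2))
                        wsum += ws * wr
                        dsum += ws * wr * depth_lr[sy, sx]
            out[y, x] = dsum / wsum if wsum > 0 else 0.0
    return out
```

Every output pixel is independent, so an FPGA or GPU implementation would evaluate all of them in parallel; the per-pixel loops here exist only for clarity.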


2020 ◽  
Vol 32 ◽  
pp. 03054
Author(s):  
Akshata Parab ◽  
Rashmi Nagare ◽  
Omkar Kolambekar ◽  
Parag Patil

Vision is one of the most essential human senses and plays a major role in human perception of the surrounding environment. For people with visual impairment, however, the definition of vision is different. Visually impaired people are often unaware of dangers in front of them, even in familiar environments. This study proposes a real-time guidance system that solves the navigation problem for visually impaired people and enables them to travel without difficulty. The system helps visually impaired people by detecting objects and giving necessary information about them. This information may include what the object is, its location, the detection precision, its distance from the user, etc. All this information is conveyed to the person through audio commands so that they can navigate freely anywhere, anytime, with little or no assistance. Object detection is done using the You Only Look Once (YOLO) algorithm. Because capturing the video/images and sending them to the main module must be carried out at high speed, a graphics processing unit (GPU) is used. This enhances the overall speed of the system and helps the visually impaired receive the necessary instructions as quickly as possible. The process starts with capturing real-time video, which is sent for analysis and processing to obtain the calculated results. The results obtained from the analysis are conveyed to the user by means of a hearing aid. As a result, with this system, blind or visually impaired people can perceive the surrounding environment and travel freely from source to destination on their own.
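YOLO-style detectors typically end with a non-maximum suppression (NMS) step that discards duplicate boxes for the same object before the results are turned into audio commands. The sketch below shows a standard greedy NMS in NumPy; it is a generic illustration, not the exact post-processing of this system.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping duplicates."""
    order = np.argsort(scores)[::-1]         # indices sorted by descending score
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        # retain only the remaining boxes that do not overlap box i too much
        order = order[1:][[iou(boxes[i], boxes[j]) < thresh for j in order[1:]]]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]
```

The surviving boxes are what a guidance system would translate into spoken object names and positions.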


Proceedings ◽  
2018 ◽  
Vol 2 (18) ◽  
pp. 1193
Author(s):  
Roi Santos ◽  
Xose Pardo ◽  
Xose Fdez-Vidal

The increasing use of autonomous UAVs inside buildings and around human-made structures demands new, accurate, and comprehensive representations of their operating environments. Most 3D scene abstraction methods use invariant feature-point matching; nevertheless, sparse 3D point clouds often fail to concisely represent the structure of the environment. Likewise, line clouds built from short, redundant segments with inaccurate directions limit the understanding of scenes such as environments with poor texture or with texture resembling a repetitive pattern. The presented approach is based on observation and representation models using straight line segments, which resemble the boundaries of urban indoor and outdoor environments. The goal of this work is a complete line-matching method that complements state-of-the-art approaches for 3D scene representation of poorly textured environments for future autonomous UAVs.
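A line-matching pipeline needs some similarity measure between candidate segments. The toy score below combines direction alignment with midpoint proximity; it is purely illustrative and much simpler than the observation and representation models the paper proposes.

```python
import numpy as np

def segment_similarity(p1, p2, q1, q2):
    """Toy match score for two 3D segments: direction alignment (1 when
    parallel, 0 when orthogonal) damped by the distance between midpoints."""
    d1 = (p2 - p1) / np.linalg.norm(p2 - p1)
    d2 = (q2 - q1) / np.linalg.norm(q2 - q1)
    alignment = abs(float(d1 @ d2))          # |cos| of the angle between segments
    mid_dist = np.linalg.norm((p1 + p2) / 2 - (q1 + q2) / 2)
    return alignment * np.exp(-mid_dist)

a, b = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
print(segment_similarity(a, b, a, b))  # 1.0
```

Scores like this reward segments that are both parallel and close, which is the intuition behind matching structural edges across views.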


2018 ◽  
Vol 7 (12) ◽  
pp. 472 ◽  
Author(s):  
Bo Wan ◽  
Lin Yang ◽  
Shunping Zhou ◽  
Run Wang ◽  
Dezhi Wang ◽  
...  

The road-network matching method is an effective tool for map integration, fusion, and update. Due to the complexity of road networks in the real world, matching methods often contain a series of complicated processes to identify homonymous roads and deal with their intricate relationships. However, traditional road-network matching algorithms, which are mainly central processing unit (CPU)-based approaches, may have performance bottleneck problems when facing big data. We developed a particle-swarm optimization (PSO)-based parallel road-network matching method on a graphics processing unit (GPU). Based on the characteristics of the two main stages (similarity computation and matching-relationship identification), data-partition and task-partition strategies were utilized, respectively, to fully use GPU threads. Experiments were conducted on datasets with 14 different scales. Results indicate that the parallel PSO-based matching algorithm (PSOM) could correctly identify most matching relationships with an average accuracy of 84.44%, which was at the same level as the accuracy of a benchmark—the probability-relaxation-matching (PRM) method. The PSOM approach significantly reduced the road-network matching time in dealing with large amounts of data in comparison with the PRM method. This paper provides a common parallel algorithm framework for road-network matching algorithms and contributes to the integration and update of large-scale road networks.
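For reference, the PSO core that such a matcher parallelizes looks like the following standard global-best PSO, shown minimizing a toy function. In the paper, particles would instead encode candidate matching relationships and the similarity stage would run across GPU threads; all parameters here are conventional defaults, not the paper's settings.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [-5, 5]^dim with a standard global-best PSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))    # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()          # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # inertia + pull toward personal best + pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

best, val = pso(lambda p: float(np.sum(p ** 2)), dim=2)
print(val < 1e-2)  # True
```

The fitness evaluations for all particles are independent per iteration, which is the property the GPU data-partition strategy exploits.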


2012 ◽  
Vol 3 (7) ◽  
pp. 1557 ◽  
Author(s):  
Kenneth K. C. Lee ◽  
Adrian Mariampillai ◽  
Joe X. Z. Yu ◽  
David W. Cadotte ◽  
Brian C. Wilson ◽  
...  
