Automatic Super-Surface Removal in Complex 3D Indoor Environments Using Iterative Region-Based RANSAC

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3724
Author(s):  
Ali Ebrahimi ◽  
Stephen Czarnuch

Removing bounding surfaces such as walls, windows, curtains, and floors (i.e., super-surfaces) from a point cloud is a common task in a wide variety of computer vision applications (e.g., object recognition and human tracking). Popular plane segmentation methods such as Random Sample Consensus (RANSAC) are widely used to segment and remove surfaces from a point cloud. However, these estimators easily associate foreground points incorrectly with background bounding surfaces because of the stochasticity of random sampling and the limited scene-specific knowledge they use. Additionally, identical approaches are generally used to detect bounding surfaces and surfaces that belong to foreground objects. Detecting and removing bounding surfaces in challenging (i.e., cluttered and dynamic) real-world scenes can easily result in the erroneous removal of points belonging to desired foreground objects such as human bodies. To address these challenges, we introduce a novel super-surface removal technique for complex 3D indoor environments. Our method was developed to work with unorganized data captured from commercial depth sensors and supports varied sensor perspectives. We begin with preprocessing steps, dividing the input point cloud into four overlapping local regions. Then, we apply an iterative surface removal approach to all four regions to segment and remove the bounding surfaces. We evaluate the performance of our proposed method in terms of four conventional metrics: specificity, precision, recall, and F1 score, on three generated datasets representing different indoor environments. Our experimental results demonstrate that our proposed method is a robust super-surface removal and size reduction approach for complex 3D indoor environments, scoring between 90% and 99% on all four evaluation metrics.
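The iterative removal loop described above can be sketched with a plain NumPy RANSAC (a minimal illustration, not the authors' region-based implementation; the thresholds, iteration counts, and function names are invented for this sketch):

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.02, rng=None):
    """Fit one plane to an Nx3 cloud with RANSAC; return the inlier mask."""
    rng = np.random.default_rng(rng)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)  # point-to-plane distance
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

def remove_super_surfaces(points, n_planes=4, min_inliers=500):
    """Iteratively segment and drop the largest planes (walls, floor, ...)."""
    remaining = points
    for _ in range(n_planes):
        mask = ransac_plane(remaining)
        if mask.sum() < min_inliers:   # no dominant plane left
            break
        remaining = remaining[~mask]
    return remaining
```

Note the failure mode the abstract warns about: any foreground point lying within `dist_thresh` of a fitted plane is removed along with it, which is exactly why scene-specific, region-based handling matters.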

2020 ◽  
Vol 961 (7) ◽  
pp. 47-55
Author(s):  
A.G. Yunusov ◽  
A.J. Jdeed ◽  
N.S. Begliarov ◽  
M.A. Elshewy

Laser scanning is considered one of the most useful and fastest technologies for modelling. On the other hand, the size of the scan results can vary from hundreds to several million points. As a result, the large volume of the obtained clouds complicates processing of the results and increases time costs. One way to reduce the volume of a point cloud is segmentation, which reduces the amount of data from several million points to a limited number of segments. In this article, we evaluated how density changes affect the performance and accuracy of various segmentation methods, as well as the geometric accuracy of the obtained models, taking processing time into account. The results of our experiment were compared with reference data in the form of a comparative analysis. In conclusion, we propose some recommendations for choosing the best segmentation method.


2021 ◽  
Vol 11 (4) ◽  
pp. 1953
Author(s):  
Francisco Martín ◽  
Fernando González ◽  
José Miguel Guerrero ◽  
Manuel Fernández ◽  
Jonatan Ginés

The perception and identification of visual stimuli from the environment is a fundamental capacity of autonomous mobile robots. Current deep learning techniques make it possible to identify and segment objects of interest in an image. This paper presents a novel algorithm to segment the object’s space from a deep segmentation of an image taken by a 3D camera. The proposed approach solves the boundary pixel problem that appears when segmented pixels are mapped directly to their correspondences in the point cloud. We validate our approach by comparing it with baseline approaches on real images taken by a 3D camera, showing that our method outperforms them in terms of accuracy and reliability. As an application of the proposed algorithm, we present a semantic mapping approach for a mobile robot in indoor environments.
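The boundary pixel problem can be illustrated with a simple back-projection heuristic (a hypothetical stand-in for the paper's algorithm, not a reproduction of it: mask pixels are lifted to 3D with the pinhole model, and points whose depth disagrees with the object's median depth are rejected as boundary leakage onto the background):

```python
import numpy as np

def segmented_points(depth, mask, fx, fy, cx, cy, z_band=0.3):
    """Back-project the pixels of a segmentation mask to 3D, then reject
    boundary pixels whose depth is inconsistent with the object body.
    fx, fy, cx, cy are pinhole intrinsics; z_band is an assumed tolerance."""
    v, u = np.nonzero(mask & (depth > 0))
    z = depth[v, u]
    # boundary pixels that actually hit the background lie far behind
    # the object, so keep only points near the object's median depth
    keep = np.abs(z - np.median(z)) < z_band
    z, u, v = z[keep], u[keep], v[keep]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack([x, y, z])
```

With a direct pixel-to-point mapping, every over-segmented boundary pixel would contribute a spurious background point; the depth-consistency filter is one cheap way to see why some correction step is needed.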


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 642
Author(s):  
Luis Miguel González de Santos ◽  
Ernesto Frías Nores ◽  
Joaquín Martínez Sánchez ◽  
Higinio González Jorge

Nowadays, unmanned aerial vehicles (UAVs) are extensively used for multiple purposes, such as infrastructure inspections or surveillance. This paper presents a real-time path planning algorithm for indoor environments designed to perform contact inspection tasks using UAVs. The only input used by this algorithm is the point cloud of the building where the UAV is going to navigate. The algorithm is divided into two main parts. The first is a pre-processing algorithm that processes the point cloud, segmenting it into rooms and discretizing each room. The second is the path planning algorithm, which has to be executed in real time. In this way, all the computational load falls on the pre-processing step, making the path calculation faster. The method has been tested in different buildings, measuring the execution time for different path calculations. As can be seen in the results section, the developed algorithm is able to calculate a new path in 8–9 milliseconds. The developed algorithm fulfils the execution time restrictions and has proven reliable for route calculation.
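The two-part split, heavy discretization offline and cheap graph search online, might be sketched as follows (an illustrative approximation only; the paper's room segmentation and discretization are more involved, and the grid resolution here is arbitrary):

```python
import heapq
import numpy as np

def build_grid(points, cell=0.5):
    """Pre-processing step: discretize a room's point cloud into a 2D
    occupancy grid (cells containing any point are marked occupied)."""
    mins = points.min(axis=0)[:2]
    idx = ((points[:, :2] - mins) / cell).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1]] = True
    return grid, mins

def plan(grid, start, goal):
    """Real-time step: A* over the precomputed grid (4-connected cells)."""
    open_set = [(0, start, [start])]
    seen = {start}
    while open_set:
        _, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + d[0], cur[1] + d[1])
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and not grid[nxt] and nxt not in seen):
                seen.add(nxt)
                h = abs(nxt[0] - goal[0]) + abs(nxt[1] - goal[1])  # Manhattan
                heapq.heappush(open_set, (len(path) + h, nxt, path + [nxt]))
    return None  # no free route between start and goal
```

Once `build_grid` has been run offline, each call to `plan` touches only the small boolean grid, which is what makes millisecond-scale query times plausible.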


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3848
Author(s):  
Xinyue Zhang ◽  
Gang Liu ◽  
Ling Jing ◽  
Siyao Chen

The heart girth parameter is an important indicator of the growth and development of pigs and provides critical guidance for the optimization of healthy pig breeding. To overcome the heavy workloads and poor adaptability of the traditional measurement methods currently used in pig breeding, this paper proposes an automated pig heart girth measurement method using two Kinect depth sensors. First, a two-view pig depth image acquisition platform is established for data collection; after preprocessing, the two-view point clouds are registered and fused by a feature-based improved 4-Point Congruent Set (4PCS) method. Second, the fused point cloud is pose-normalized, and the axillary contour is used to automatically extract the heart girth measurement point. Finally, this point is taken as the starting point to extract, from the pig point cloud, the circumference perpendicular to the ground, and the complete heart girth point cloud is obtained by mirror symmetry. The heart girth is measured along this point cloud using the shortest path method. Using the proposed method, experiments were conducted on two-view data from 26 live pigs. The results showed that the absolute errors of the heart girth measurements were all less than 4.19 cm, and the average relative error was 2.14%, indicating the high accuracy and efficiency of this method.
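A simplified version of the circumference measurement, slicing the cloud at the measurement point and summing distances around the angle-ordered slice, could look like this (illustrative only; the paper measures along the cloud with a shortest-path method, and the slab width here is an assumption):

```python
import numpy as np

def girth_from_slice(points, x0, slab=0.01):
    """Estimate a body circumference from an Nx3 cloud: take a thin slab
    at x0 along the body axis, project it to the cross-sectional plane,
    order the points by polar angle around their centroid, and sum the
    consecutive point-to-point distances around the closed loop."""
    slab_pts = points[np.abs(points[:, 0] - x0) < slab][:, 1:]
    centered = slab_pts - slab_pts.mean(axis=0)
    order = np.argsort(np.arctan2(centered[:, 1], centered[:, 0]))
    ring = slab_pts[order]
    diffs = np.diff(np.vstack([ring, ring[:1]]), axis=0)  # close the loop
    return np.linalg.norm(diffs, axis=1).sum()
```

On a dense, convex cross-section this polygonal sum converges to the true girth; the paper's shortest-path formulation handles the gaps and concavities a real single-slab slice would have.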


Aerospace ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 94 ◽  
Author(s):  
Hriday Bavle ◽  
Jose Sanchez-Lopez ◽  
Paloma Puente ◽  
Alejandro Rodriguez-Ramos ◽  
Carlos Sampedro ◽  
...  

This paper presents a fast and robust approach for estimating the flight altitude of multirotor Unmanned Aerial Vehicles (UAVs) using 3D point cloud sensors in cluttered, unstructured, and dynamic indoor environments. The objective is to present a flight altitude estimation algorithm that replaces conventional sensors such as laser altimeters, barometers, or accelerometers, which have several limitations when used individually. Our proposed algorithm includes two stages: in the first stage, a fast clustering of the measured 3D point cloud data is performed, along with the segmentation of the clustered data into horizontal planes. In the second stage, these segmented horizontal planes are mapped based on their vertical distance with respect to the point cloud sensor's frame of reference, in order to provide a robust flight altitude estimate even in the presence of static as well as dynamic ground obstacles. We validate our approach using the IROS 2011 Kinect dataset available in the literature, estimating the altitude of the RGB-D camera from the provided 3D point clouds. We further validate our approach using a point cloud sensor on board a UAV, by means of several autonomous real flights, closing its altitude control loop with the flight altitude estimated by our proposed method, in the presence of several different static as well as dynamic ground obstacles. In addition, the implementation of our approach has been integrated into Aerostack, our open-source software framework for aerial robotics.
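The two-stage idea, grouping points into horizontal planes and then picking the floor by vertical distance, can be approximated with a simple height histogram (a hedged sketch, not the Aerostack implementation; the bin size, point threshold, and sign convention are invented here):

```python
import numpy as np

def estimate_altitude(cloud_z, bin_size=0.05, min_pts=50):
    """Sketch of altitude estimation from the z-coordinates of a point
    cloud in the sensor frame (floor below the sensor has z < 0).
    Histogram the heights; each well-populated bin is a candidate
    horizontal plane, and the farthest plane below is taken as the floor,
    so obstacle tops between sensor and floor do not bias the estimate."""
    below = cloud_z[cloud_z < 0]
    bins = np.floor(below / bin_size).astype(int)
    ids, counts = np.unique(bins, return_counts=True)
    planes = ids[counts >= min_pts]          # horizontal-plane candidates
    if len(planes) == 0:
        return None                          # no plane visible
    floor_bin = planes.min()                 # most negative = farthest below
    return -(floor_bin + 0.5) * bin_size     # altitude above the floor
```

Choosing the farthest dominant plane rather than the nearest is what makes such an estimator robust to boxes or people passing under the vehicle, which is the behaviour the abstract claims for the full algorithm.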


Author(s):  
Mathieu Dubois ◽  
Paola K. Rozo ◽  
Alexander Gepperth ◽  
O. Fabio A. Gonzalez ◽  
David Filliat

2020 ◽  
Vol 49 (2-3) ◽  
Author(s):  
Aliki Konsolaki ◽  
Emmanuel Vassilakis ◽  
Leonidas Gouliotis ◽  
Georgios Kontostavlos ◽  
Vassilis Giannopoulos

Remote sensing techniques and laser scanning technology have given us the opportunity to study indoor environments, such as caves, with their complex and unique morphology. In the presented case study, we used a handheld laser scanner to acquire points with projected coordinate information (X, Y, Z) covering the entire show cave of Koutouki, including its hidden passages and dark corners. The point cloud covers the floor, the walls, and the roof of the cave, as well as the stalactites, stalagmites, and connected columns that constitute the decoration of the cave. The absolute and exact placement of the point cloud within a geographic reference frame allows three-dimensional measurements and detailed visualization of the subsurface structures. Using open-source software, we performed a quantitative analysis of the terrain and generated morphological and geometric features of the speleothems. We identified 55 columns by using digital terrain analysis and processed them statistically in order to relate them to the cave's development. The derived parameters are the contours, the height of each column, the speleothem geometry and volume, as well as the volume of the open-space cavity. We argue that the demonstrated methodology makes it possible to identify the geomorphological features of a cave with high accuracy and detail, to estimate its speleogenesis, and to monitor the evolution of a karstic system.
Key words: cave, laser scanner, 3D representation, speleothems, SLAM.


Author(s):  
A. Masiero ◽  
F. Fissore ◽  
A. Guarnieri ◽  
A. Vettore

The subject of photogrammetric surveying with mobile devices, in particular smartphones, is becoming of significant interest in the research community. Nowadays, the process of producing 3D point clouds with photogrammetric procedures is well known. However, external information is still typically needed in order to move from the point cloud obtained from images to a 3D metric reconstruction. This paper investigates the integration of information provided by a UWB positioning system with vision-based reconstruction to produce a metric reconstruction. Furthermore, the orientation (with respect to the North-East directions) of the obtained model is assessed thanks to the inertial sensors included in the considered UWB devices. Results of this integration are shown for two case studies in indoor environments.
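The scale part of such an integration, making an up-to-scale photogrammetric model metric using UWB-measured camera positions, reduces to a one-parameter least-squares fit (a sketch under the assumption that corresponding camera positions are available in both the model frame and the UWB frame; the paper's full integration also handles orientation):

```python
import numpy as np

def metric_scale(model_positions, uwb_positions):
    """Estimate the scale factor turning an up-to-scale photogrammetric
    model metric: least-squares fit of the inter-camera distances measured
    in the model against those measured by UWB, i.e. argmin_s ||s*m - u||^2,
    whose closed form is s = (m . u) / (m . m)."""
    m = np.linalg.norm(np.diff(model_positions, axis=0), axis=1)
    u = np.linalg.norm(np.diff(uwb_positions, axis=0), axis=1)
    return float(np.dot(m, u) / np.dot(m, m))
```

Using distance ratios rather than raw coordinates sidesteps the unknown rigid transform between the two frames, which is why only the scale (not the alignment) is recovered by this step.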

