A Graph Representation Composed of Geometrical Components for Household Furniture Detection by Autonomous Mobile Robots

2018 ◽  
Vol 8 (11) ◽  
pp. 2234 ◽  
Author(s):  
Oscar Alonso-Ramirez ◽  
Antonio Marin-Hernandez ◽  
Homero Rios-Figueroa ◽  
Michel Devy ◽  
Saul Pomares-Hernandez ◽  
...  

This study proposes a framework for detecting and recognizing household furniture with autonomous mobile robots. The proposed methodology is based on the analysis and integration of geometric features extracted from 3D point clouds. A relational graph is constructed from those features to model and recognize each piece of furniture. A set of sub-graphs corresponding to different partial views allows the robot’s perception to be matched against partial furniture models. A reduced set of geometric features is employed: horizontal and vertical planes and the legs of the furniture. These features are characterized through properties such as height, planarity and area. A fast, linear method for detecting some of these geometric features is proposed, based on histograms of 3D points acquired from an RGB-D camera onboard the robot. Similarity measures for geometric features and for graphs are also proposed. Our proposal has been validated in home-like environments with two different mobile robotic platforms, and partially on 3D samples from a database.
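The histogram-based detection of horizontal planes mentioned above can be sketched roughly as follows. This is only an illustration of the idea, not the authors' implementation: the bin size, the minimum point count, and the synthetic cloud are all assumed values.

```python
import numpy as np

def horizontal_plane_heights(points, bin_size=0.02, min_points=200):
    """Find candidate heights of horizontal planes (table tops, seats)
    by histogramming the z coordinates of a 3D point cloud.

    points: (N, 3) array of x, y, z coordinates in metres.
    Returns the centre heights of all bins holding >= min_points points.
    """
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + bin_size, bin_size)
    counts, edges = np.histogram(z, bins=edges)
    centres = (edges[:-1] + edges[1:]) / 2
    return [float(c) for c, n in zip(centres, counts) if n >= min_points]

# Synthetic cloud: a dense horizontal surface at z = 0.75 m plus sparse noise.
rng = np.random.default_rng(0)
table = np.column_stack([rng.uniform(0, 1, 1000),
                         rng.uniform(0, 1, 1000),
                         rng.normal(0.75, 0.005, 1000)])
noise = rng.uniform(0, 1.5, (300, 3))
cloud = np.vstack([table, noise])

heights = horizontal_plane_heights(cloud)
```

A single pass over the z histogram is linear in the number of points, which is what makes this kind of detector attractive onboard a robot.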

2017 ◽  
Vol 29 (5) ◽  
pp. 928-934
Author(s):  
Kiyoaki Takahashi ◽  
Takafumi Ono ◽  
Tomokazu Takahashi ◽  
Masato Suzuki ◽  
...  

Autonomous mobile robots need to acquire information about their surrounding environment, on which their self-localization is based. Current autonomous mobile robots often use point cloud data acquired by laser range finders (LRFs) instead of image data. In the virtual autonomous traveling tests conducted in this study, we evaluated the robot’s self-localization performance with Normal Distributions Transform (NDT) scan matching, using both 2D and 3D point cloud data to assess which of the two yields better self-localization.
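The core of NDT is to summarise the reference point cloud as one Gaussian (mean and covariance) per grid cell, and then score candidate scan poses by the likelihood of their points under those Gaussians. A minimal 2D sketch of that representation, with an illustrative cell size and synthetic data (not from the study):

```python
import numpy as np

def build_ndt_map(points, cell_size=1.0):
    """Summarise a 2D point cloud as per-cell Gaussians (mean, covariance),
    the representation NDT scan matching scores candidate poses against."""
    cells = {}
    for p in points:
        key = (int(np.floor(p[0] / cell_size)), int(np.floor(p[1] / cell_size)))
        cells.setdefault(key, []).append(p)
    ndt = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < 3:            # need a few points for a stable covariance
            continue
        mean = pts.mean(axis=0)
        cov = np.cov(pts.T) + 1e-6 * np.eye(2)   # regularise near-singular cells
        ndt[key] = (mean, np.linalg.inv(cov))
    return ndt

def ndt_score(point, ndt, cell_size=1.0):
    """Likelihood-style score of one scan point under its cell's Gaussian."""
    key = (int(np.floor(point[0] / cell_size)), int(np.floor(point[1] / cell_size)))
    if key not in ndt:
        return 0.0
    mean, cov_inv = ndt[key]
    d = point - mean
    return float(np.exp(-0.5 * d @ cov_inv @ d))

# A straight wall at y = 2.5: points on it score high, points beside it low.
rng = np.random.default_rng(1)
wall = np.column_stack([rng.uniform(0, 5, 500), rng.normal(2.5, 0.05, 500)])
ndt = build_ndt_map(wall, cell_size=1.0)
on_wall = ndt_score(np.array([2.5, 2.5]), ndt)
off_wall = ndt_score(np.array([2.5, 2.9]), ndt)
```

The 3D variant evaluated in the study is the same idea with 3×3 covariances per voxel; the extra dimension is what makes it more expensive but potentially more discriminative.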


Author(s):  
E. Grilli ◽  
E. M. Farella ◽  
A. Torresani ◽  
F. Remondino

Abstract. In recent years, the application of artificial intelligence (Machine Learning and Deep Learning methods) to the classification of 3D point clouds has become an important task in modern 3D documentation and modelling applications. The identification of proper geometric and radiometric features is fundamental to classifying 2D/3D data correctly. While many studies have been conducted in the geospatial field, the cultural heritage sector is still partly unexplored. In this paper we analyse the efficacy of geometric covariance features as a support for the classification of Cultural Heritage point clouds. To analyse the impact of the different features calculated on spherical neighbourhoods of various radii, we present results obtained on four different heritage case studies using different feature configurations.
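Geometric covariance features of the kind analysed above are conventionally derived from the sorted eigenvalues λ1 ≥ λ2 ≥ λ3 of the 3D covariance matrix of a point's neighbourhood. A minimal sketch (the neighbourhood and the synthetic patch are illustrative, not from the paper):

```python
import numpy as np

def covariance_features(points):
    """Eigenvalue-based geometric features of a point neighbourhood.

    points: (N, 3) neighbourhood (e.g. all points within a spherical radius).
    Returns linearity, planarity and sphericity, computed from the sorted
    eigenvalues l1 >= l2 >= l3 of the 3D covariance matrix.
    """
    cov = np.cov(np.asarray(points).T)
    l3, l2, l1 = np.linalg.eigvalsh(cov)   # eigvalsh returns ascending order
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    sphericity = l3 / l1
    return linearity, planarity, sphericity

# A nearly planar patch: large spread in x and y, tiny spread in z,
# so planarity should dominate.
rng = np.random.default_rng(2)
patch = np.column_stack([rng.uniform(-1, 1, 500),
                         rng.uniform(-1, 1, 500),
                         rng.normal(0, 0.01, 500)])
lin, pla, sph = covariance_features(patch)
```

By construction the three features sum to one, so they can be read as a soft assignment of the neighbourhood to a line-like, plane-like or volume-like shape; varying the spherical radius changes which structures dominate, which is exactly the multi-scale effect the paper studies.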


2014 ◽  
Vol 26 (2) ◽  
pp. 185-195 ◽  
Author(s):  
Masanobu Saito ◽  
Kentaro Kiuchi ◽  
Shogo Shimizu ◽  
Takayuki Yokota ◽  
...  

This paper describes navigation systems for autonomous mobile robots taking part in the real-world Tsukuba Challenge 2013 robot competition. Tsukuba Challenge 2013 allows any information about the route to be collected beforehand and used on the day of the challenge. At the same time, however, autonomous mobile robots should function appropriately in daily human life even in areas they have never visited before, so the system should not depend on details captured by driving the route in advance. We analyzed traverses of complex urban areas without prior environmental information using light detection and ranging (LIDAR). We also determined robot status, such as position and orientation, using Gauss maps derived from LIDAR, without gyro sensors. Dead reckoning combined wheel odometry with the orientation estimated as above. We corrected 2D robot poses by matching against electronic maps from the Web. Because drift, slippage, and other failures inevitably cause errors, our robot also traced waypoints derived beforehand from the same electronic map, so localization remains consistent even if we do not drive through an area ahead of time. Trajectory candidates are generated along global planning routes based on these waypoints, and an optimal trajectory is selected. Tsukuba Challenge 2013 required robots to find specified human targets indicated by features released on the Web. To find the targets correctly without driving in Tsukuba beforehand, we searched for point cloud clusters similar to the specified human targets based on predefined features. These point clouds were then projected onto the camera image, and we extracted points of interest such as SURF to apply fast appearance-based mapping (FAB-MAP). This enabled us to find the specified targets with high accuracy. To demonstrate the feasibility of our system, experiments were conducted on a route at our university and on the Tsukuba Challenge course.
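The dead-reckoning step described above, which fuses a travelled distance from wheel odometry with an externally estimated heading, reduces to a simple pose update. A minimal sketch under assumed units (metres and radians); the actual system's update is of course more elaborate:

```python
import math

def dead_reckon(pose, distance, heading):
    """One dead-reckoning step: advance the 2D position by the distance
    travelled (from wheel odometry) along a heading supplied by a separate
    orientation estimate (here, no gyro is assumed).

    pose: (x, y) in metres; heading in radians; returns the new (x, y).
    """
    x, y = pose
    return (x + distance * math.cos(heading),
            y + distance * math.sin(heading))

# Drive 1 m east, then 1 m north.
pose = (0.0, 0.0)
pose = dead_reckon(pose, 1.0, 0.0)
pose = dead_reckon(pose, 1.0, math.pi / 2)
```

Because each step compounds the error of the previous one, drift grows without bound, which is why the paper corrects these poses against electronic maps and pre-placed waypoints.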


2021 ◽  
Vol 10 (3) ◽  
pp. 187
Author(s):  
Muhammed Enes Atik ◽  
Zaide Duran ◽  
Dursun Zafer Seker

3D scene classification has become an important research field in photogrammetry, remote sensing, computer vision and robotics with the widespread use of 3D point clouds. Point cloud classification, also called semantic labeling, semantic segmentation, or semantic classification of point clouds, is a challenging topic. Machine learning is a powerful mathematical tool for classifying 3D point clouds, whose content can be significantly complex. In this study, the classification performance of different machine learning algorithms at multiple scales was evaluated. The feature spaces of the points in the point cloud were created using geometric features generated from the eigenvalues of the covariance matrix. Eight supervised classification algorithms were tested in four different areas from three datasets (the Dublin City, Vaihingen and Oakland3D datasets). The algorithms were evaluated in terms of overall accuracy, precision, recall, F1 score and processing time. The best overall accuracy was achieved by a different algorithm in each test area: 93.12% on Dublin City Area 1 with Random Forest, 92.78% on Dublin City Area 2 with a Multilayer Perceptron, 79.71% on Vaihingen with Support Vector Machines, and 97.30% on Oakland3D with Linear Discriminant Analysis.
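The evaluation measures named above (per-class precision, recall and F1, plus overall accuracy) have standard definitions that can be computed directly from predicted and true labels. A minimal sketch with made-up labels for illustration:

```python
def classification_metrics(y_true, y_pred, positive):
    """Per-class precision, recall and F1 for one class, plus overall
    accuracy across all classes, from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return precision, recall, f1, accuracy

# Hypothetical semantic labels for five points.
truth = ["roof", "roof", "ground", "ground", "veg"]
pred  = ["roof", "ground", "ground", "ground", "veg"]
prec, rec, f1, acc = classification_metrics(truth, pred, "roof")
# One of two true "roof" points is found: precision 1.0, recall 0.5.
```

Reporting all four measures together matters here precisely because, as the study shows, no single algorithm dominates across the test areas.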


Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 145
Author(s):  
Alessandra Capolupo

A proper classification of 3D point clouds allows the full potential of the data to be exploited in assessing and preserving cultural heritage. Point cloud classification workflows are commonly based on the selection and extraction of suitable geometric features. Although several research activities have investigated the impact of geometric features on classification accuracy, only a few works have focused on the accuracy and reliability of the features themselves. This paper investigates the accuracy of 3D point cloud geometric features through a statistical analysis based on their corresponding eigenvalues and covariance, with the aim of assessing their effectiveness for cultural heritage classification. The proposed approach was applied separately to two high-quality 3D point clouds of the All Saints’ Monastery of Cuti (Bari, Southern Italy), generated using two competing survey techniques: Remotely Piloted Aircraft System (RPAS) photogrammetry based on Structure from Motion (SfM) and Multi-View Stereo (MVS), and Terrestrial Laser Scanning (TLS). Point cloud compatibility was guaranteed through re-alignment and co-registration of the data. The accuracy of the geometric features obtained from the RPAS digital photogrammetric and TLS models is then analyzed and presented. Lastly, a discussion of the convergences and divergences of these results is also provided.


Author(s):  
E. Özdemir ◽  
F. Remondino ◽  
A. Golkar

Abstract. With recent advances in technology, 3D point clouds are requested and used more and more frequently, not only for visualization but also, for example, by public administrations for urban planning and management. 3D point clouds are also a very frequent source for generating 3D city models, which have recently become available for many applications, such as urban development plans, energy evaluation, navigation, visibility analysis and numerous other GIS studies. While the main data sources have remained the same (namely aerial photogrammetry and LiDAR), the way these city models are generated has been evolving towards automation through different approaches. As most of these approaches are based on point clouds with proper semantic classes, our aim is to classify aerial point clouds into meaningful semantic classes, e.g. ground-level objects (GLO, including roads and pavements), vegetation, building facades and building roofs. In this study we tested and evaluated various algorithms for classification, including three deep learning algorithms and one machine learning algorithm. In the experiments, several hand-crafted geometric features depending on the dataset are used and, unconventionally, these geometric features are also used for deep learning.


Sensor Review ◽  
2020 ◽  
Vol 40 (2) ◽  
pp. 175-182
Author(s):  
Akif Hacinecipoglu ◽  
Erhan Ilhan Konukseven ◽  
Ahmet Bugra Koku

Purpose – This study aims to develop a real-time algorithm that can detect people even in arbitrary poses. To cope with poor and changing light conditions, it does not rely on color information. The developed method is expected to run on computers with low computational resources so that it can be deployed on autonomous mobile robots. Design/methodology/approach – The method is designed as a people detection pipeline with a series of operations. Efficient point cloud processing steps with a novel head extraction operation provide possible head clusters in the scene. Classification of these clusters using support vector machines results in a fast and robust people detector. Findings – The method was implemented on an autonomous mobile robot, and results show that it can detect people at a frame rate of 28 Hz with an equal error rate of 92 per cent. Also, in various non-standard poses, the detector is still able to classify people effectively. Research limitations/implications – The main limitations are point cloud clusters similar to a head shape causing false positives, and disruptive accessories (such as large hats) causing false negatives. Still, these can be overcome with sufficient training samples. Practical implications – The method can be used in industrial and social mobile applications because of its robustness, low resource needs and low power consumption. Originality/value – The paper introduces a novel and efficient technique to detect people in arbitrary poses, under poor light conditions and with low computational resources. Solving all these problems in a single, lightweight method makes the study fulfill an important need for collaborative and autonomous mobile robots.
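The candidate-generation stage of such a pipeline can be sketched roughly as follows: filter point-cloud clusters to those whose topmost point sits at plausible head height and whose horizontal extent is head-sized. The height range, extent threshold and synthetic clusters are illustrative assumptions, and the SVM classification stage of the paper is omitted.

```python
import numpy as np

def head_candidates(clusters, height_range=(1.3, 2.0), max_extent=0.35):
    """Filter point-cloud clusters to plausible head candidates by the
    height of their topmost point (metres above the floor) and their
    horizontal extent. Only the candidate-generation step is sketched;
    a trained classifier would then accept or reject each candidate.
    """
    candidates = []
    for c in clusters:
        c = np.asarray(c)
        top = c[:, 2].max()
        extent = max(np.ptp(c[:, 0]), np.ptp(c[:, 1]))
        if height_range[0] <= top <= height_range[1] and extent <= max_extent:
            candidates.append(c)
    return candidates

# A compact head-sized cluster at ~1.6 m, and a wide wall segment.
rng = np.random.default_rng(3)
head = np.column_stack([rng.uniform(-0.1, 0.1, 200),
                        rng.uniform(-0.1, 0.1, 200),
                        rng.uniform(1.45, 1.65, 200)])
wall = np.column_stack([rng.uniform(-2, 2, 400),
                        np.full(400, 1.0),
                        rng.uniform(0, 2.5, 400)])
found = head_candidates([head, wall])
```

Cheap geometric gating of this kind is what keeps the per-frame cost low enough for a classifier to run in real time on modest hardware.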


Sensor Review ◽  
2014 ◽  
Vol 34 (2) ◽  
pp. 220-232 ◽  
Author(s):  
Giulio Reina ◽  
Mauro Bellone ◽  
Luigi Spedicato ◽  
Nicola Ivan Giannoccaro

Purpose – This research aims to address the issue of safe navigation for autonomous vehicles in highly challenging outdoor environments. Indeed, robust navigation of autonomous mobile robots over long distances requires advanced perception for terrain traversability assessment. Design/methodology/approach – The use of visual systems may represent an efficient solution. This paper discusses recent findings in terrain traversability analysis from RGB-D images. In this context, the concept of a point as described only by its Cartesian coordinates is reinterpreted in terms of a local description. As a result, a novel descriptor for inferring the traversability of a terrain through its 3D representation, referred to as the unevenness point descriptor (UPD), is conceived. This descriptor features robustness and simplicity. Findings – The UPD-based algorithm shows robust terrain perception capabilities in both indoor and outdoor environments. The algorithm is able to detect obstacles and terrain irregularities. The system performance is validated in field experiments in both indoor and outdoor environments. Research limitations/implications – The UPD enhances the interpretation of the 3D scene to improve the ambient awareness of unmanned vehicles. The larger implications of this method reside in its applicability for path planning purposes. Originality/value – This paper describes a visual algorithm for traversability assessment based on normal vector analysis. The algorithm is simple and efficient, providing fast real-time implementation, since the UPD does not require any additional data processing or a previously generated digital elevation map to classify the scene. Moreover, it defines a local descriptor of general value for segmentation of 3D point clouds, which allows the underlying geometric pattern associated with each single 3D point to be fully captured and difficult scenarios to be handled correctly.
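Normal-vector traversability analysis of the kind the UPD builds on can be illustrated with a simple proxy: estimate the local surface normal as the smallest-eigenvalue direction of the neighbourhood covariance and measure its angle from the vertical. This is only an illustration of the principle, not the UPD itself; the synthetic patches are assumptions.

```python
import numpy as np

def surface_unevenness(points):
    """Angle (radians) between a local surface normal and the vertical.

    The normal is estimated as the eigenvector of the neighbourhood
    covariance with the smallest eigenvalue. Flat ground yields angles
    near 0; walls and steep slopes yield large angles, flagging the
    patch as non-traversable.
    """
    cov = np.cov(np.asarray(points).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    normal = eigvecs[:, 0]                   # smallest-eigenvalue direction
    cos_a = abs(normal @ np.array([0.0, 0.0, 1.0]))
    return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))

rng = np.random.default_rng(4)
# Flat ground patch (small z spread) and a vertical wall patch (small y spread).
ground = np.column_stack([rng.uniform(0, 1, 300), rng.uniform(0, 1, 300),
                          rng.normal(0, 0.01, 300)])
wall = np.column_stack([rng.uniform(0, 1, 300), rng.normal(0, 0.01, 300),
                        rng.uniform(0, 1, 300)])
ground_angle = surface_unevenness(ground)
wall_angle = surface_unevenness(wall)
```

Because such a descriptor is computed per point from raw neighbourhoods, no digital elevation map or other intermediate product is needed, which is the property the paper highlights for real-time use.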

