Staircase Detection, Characterization and Approach Pipeline for Search and Rescue Robots

2021 ◽  
Vol 11 (22) ◽  
pp. 10736
Author(s):  
José Armando Sánchez-Rojas ◽  
José Aníbal Arias-Aguilar ◽  
Hiroshi Takemura ◽  
Alberto Elías Petrilli-Barceló

Currently, most rescue robots are mainly teleoperated and integrate some level of autonomy to reduce the operator’s workload, allowing them to focus on the primary mission tasks. One of the main causes of mission failure is human error, and increasing the robot’s autonomy can increase the probability of success. For this reason, in this work, a stair detection and characterization pipeline is presented. The pipeline is tested on a differential drive robot using the ROS middleware, YOLOv4-tiny, and a region-growing-based clustering algorithm. The pipeline’s staircase detector was implemented using the Neural Compute Engines (NCEs) of the OpenCV AI Kit with Depth (OAK-D) RGB-D camera, which allowed it to run on the robot’s computer without a GPU; it could therefore be deployed in similar robots to increase autonomy. Furthermore, using this pipeline, we were able to implement a fuzzy controller that allows the robot to align itself autonomously with the staircase. Our work can be used in different robots running the ROS middleware and can increase autonomy, allowing the operator to focus on the primary mission tasks. Finally, due to the design of the pipeline, it can be used with different types of RGB-D cameras, including those that generate noisy point clouds from low-disparity depth images.
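The region-growing clustering step of such a pipeline can be illustrated with a minimal sketch (not the authors' implementation): starting from an unvisited seed, the cluster repeatedly absorbs any unvisited point within a fixed radius of a cluster member. The point data, radius, and minimum cluster size below are illustrative assumptions.

```python
import numpy as np

def region_grow_clusters(points, radius=0.3, min_size=3):
    """Cluster points by region growing: starting from an unvisited seed,
    repeatedly add any unvisited point within `radius` of a cluster member."""
    visited = np.zeros(len(points), dtype=bool)
    clusters = []
    for seed in range(len(points)):
        if visited[seed]:
            continue
        visited[seed] = True
        queue, cluster = [seed], []
        while queue:
            i = queue.pop()
            cluster.append(i)
            dists = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((dists < radius) & ~visited)[0]:
                visited[j] = True
                queue.append(j)
        if len(cluster) >= min_size:
            clusters.append(cluster)
    return clusters

# Two well-separated groups of 3D points -> two clusters
pts = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0],
                [5.0, 5, 5], [5.1, 5, 5], [5.2, 5, 5]])
print(len(region_grow_clusters(pts)))  # 2
```

In a real pipeline the brute-force distance scan would be replaced by a k-d tree neighbor query, but the growth logic is the same.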

2021 ◽  
Vol 13 (9) ◽  
pp. 1859
Author(s):  
Xiangyang Liu ◽  
Yaxiong Wang ◽  
Feng Kang ◽  
Yang Yue ◽  
Yongjun Zheng

The characteristic parameters of Citrus grandis var. Longanyou canopies are important when measuring yield and spraying pesticides. However, the feasibility of canopy reconstruction methods based on point clouds has not been confirmed for these canopies. Therefore, LiDAR point cloud data for C. grandis var. Longanyou were obtained to facilitate the management of groves of this species. A cloth simulation filter and a Euclidean clustering algorithm were then used to extract individual canopies. After calculating canopy height and width, canopy reconstruction and volume calculation were performed using six approaches: a manual method and five algorithms based on point clouds (convex hull, CH; convex hull by slices; voxel-based, VB; alpha-shape, AS; alpha-shape by slices, ASBS). ASBS is an innovative algorithm that combines AS with slice-wise optimization and best approximates the actual canopy shape. The volume with the highest accuracy was obtained with the ASBS algorithm, the CH algorithm had the shortest computation time, and the R2 values of the VCH, VVB, VAS, and VASBS volumes were all above 0.87. In addition, a preliminary theoretical system suitable for calculating the canopy volume of C. grandis var. Longanyou was developed, which provides a reference for the efficient and accurate realization of future functional modules such as precise plant protection, orchard obstacle avoidance, and biomass estimation.
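Of the five point-cloud algorithms, the voxel-based (VB) idea is the simplest to sketch: bin the canopy points into a 3D grid and multiply the number of occupied cells by the single-cell volume. The following is a minimal illustration with an assumed voxel size, not the authors' implementation.

```python
import numpy as np

def canopy_volume_voxel(points, voxel=0.1):
    """Voxel-based (VB) volume: count occupied grid cells and multiply
    by the single-cell volume."""
    idx = np.floor(points / voxel).astype(int)
    occupied = {tuple(i) for i in idx}
    return len(occupied) * voxel ** 3

# Cell-center samples of a unit cube: 1000 occupied voxels of 0.001 m^3 each
g = np.arange(0, 1, 0.1) + 0.05
cube = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)
print(round(canopy_volume_voxel(cube), 3))  # 1.0
```

The voxel size trades resolution against sensitivity to sparse sampling, which is one reason the hull- and alpha-shape-based estimates behave differently on real canopies.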


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3347 ◽  
Author(s):  
Zhishuang Yang ◽  
Bo Tan ◽  
Huikun Pei ◽  
Wanshou Jiang

The classification of point clouds is a basic task in airborne laser scanning (ALS) point cloud processing. It is quite challenging when facing complex scenes and irregular point distributions. To reduce the computational burden of point-based classification methods and improve classification accuracy, we present a segmentation and multi-scale convolutional neural network-based classification method. First, a three-step region-growing segmentation method was proposed to reduce both under-segmentation and over-segmentation. Then, a feature image generation method was used to transform the 3D neighborhood features of a point into a 2D image. Finally, the feature images were treated as the input of a multi-scale convolutional neural network for training and testing. To compare performance with existing approaches, we evaluated our framework on the International Society for Photogrammetry and Remote Sensing Working Groups II/4 (ISPRS WG II/4) 3D labeling benchmark. Our method achieved 84.9% overall accuracy and an average F1 score of 69.2%, a satisfactory performance compared with all participating approaches analyzed.
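The feature-image idea, rasterizing a point's 3D neighborhood into a 2D image that a CNN can consume, can be sketched as follows. The grid size, radius, and the choice of maximum relative height as the pixel value are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def feature_image(points, center_idx, img_size=8, radius=1.0):
    """Render a point's neighborhood as a 2D 'feature image': neighbors within
    `radius` (in XY) are projected around the point, and each pixel stores the
    maximum relative height, giving a CNN-ready raster."""
    center = points[center_idx]
    rel = points - center
    mask = np.linalg.norm(rel[:, :2], axis=1) <= radius
    img = np.zeros((img_size, img_size))
    # map XY offsets in [-radius, radius] to pixel indices
    px = ((rel[mask, :2] + radius) / (2 * radius) * (img_size - 1)).astype(int)
    for (x, y), z in zip(px, rel[mask, 2]):
        img[y, x] = max(img[y, x], z)
    return img

pts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 1.0], [3.0, 3.0, 0.0]])
img = feature_image(pts, 0)
print(img.shape, img.max())  # (8, 8) 1.0
```

Any per-point feature (intensity, planarity, normal angle) could fill the pixels instead of height; the key point is that the 3D neighborhood becomes a fixed-size 2D input.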


2018 ◽  
Author(s):  
Arriel Benis ◽  
Nissim Harel ◽  
Refael Barak Barkan ◽  
Einav Srulovici ◽  
Calanit Key

BACKGROUND Data collected by health care organizations consist of medical information and documentation of interactions with patients through different communication channels. This enables a health care organization to measure various features of its performance, such as activity, efficiency, adherence to treatment, and different quality indicators. This information can be linked to sociodemographic and clinical data as well as to records of communication with health care providers and administrative teams. Analyzing all these measurements together may provide insights into the different types of patient behaviors or, more accurately, into the different types of interactions patients have with health care organizations. OBJECTIVE The primary aim of this study is to characterize usage profiles of the communication channels available for contacting the health care organization. The main objective is to suggest new ways to encourage the use of the most appropriate communication channel based on the patient’s profile. The first hypothesis is that the patient’s follow-up and clinical outcomes are influenced by the patient’s preferred communication channels with the health care organization. The second hypothesis is that the adoption of newly introduced communication channels between the patient and the health care organization is influenced by the patient’s sociodemographic or clinical profile. The third hypothesis is that the introduction of a new communication channel influences the usage of existing communication channels. METHODS All relevant data will be extracted from the data warehouse of Clalit Health Services, the largest health care organization in Israel. The data analysis will follow a data mining approach: discovering new knowledge by processing the extracted data with statistical methods, machine learning algorithms, and information visualization tools.
More specifically, we will mainly use the k-means clustering algorithm for discretization and for building patient profiles, as well as a hierarchical clustering algorithm and heat maps for visualizing the different communication profiles. In addition, patient interviews will be conducted to complement the information drawn from the data analysis phase, with the aim of suggesting ways to optimize existing communication flows. RESULTS The project was funded in 2016. Data analysis is currently under way, and the results are expected to be submitted for publication in 2019. Identifying patient profiles will allow the health care organization to improve its accessibility to patients and their engagement, which in turn will achieve better treatment adherence, quality of care, and patient experience. CONCLUSIONS Matching the communication channels to the patient’s profile and shifting the health care organization’s communication with the patient to a highly proactive mode will increase patient accessibility to the health care organization and patient engagement according to his or her profile. INTERNATIONAL REGISTERED REPORT RR1-10.2196/10734
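As a rough illustration of the planned k-means step, the following minimal implementation clusters a toy channel-usage matrix into usage profiles. The data, the features (calls vs. app logins), and the parameters are hypothetical, not drawn from the Clalit data warehouse.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical channel-usage matrix: rows = patients, cols = (calls, app logins)
X = np.array([[9.0, 1.0], [8.0, 2.0], [1.0, 9.0], [2.0, 8.0]])
labels, _ = kmeans(X, 2)
print(labels[0] != labels[2])  # True: phone-heavy and app-heavy profiles separate
```

The resulting per-cluster centroids are what a hierarchical clustering and heat map visualization would then summarize across many channels.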


Author(s):  
Z. Sha ◽  
Y. Chen ◽  
W. Li ◽  
C. Wang ◽  
A. Nurunnabi ◽  
...  

Abstract. Road extraction plays a significant role in the production of high-definition maps (HD maps). This paper presents a novel boundary-enhanced supervoxel segmentation method for extracting road edge contours from MLS point clouds. The proposed method first leverages normal-based feature judgment to obtain the global geometric information of the 3D point cloud, then clusters points according to an existing method augmented with this global geometric information to enhance the boundaries. Finally, it uses a neighbor spatial distance metric to extract the contours and remove outliers. The proposed method is tested on two datasets acquired by a RIEGL VMX-450 MLS system that contain the major point cloud scenes with different types of road boundaries. The experimental results demonstrate that the proposed method provides a promising solution for extracting contours efficiently and completely, with precision values 1.5 times higher than, or approximately equal to, those of two existing methods at comparable recall on both tested road datasets.
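The normal-based feature judgment step relies on per-point normals, which are commonly estimated as the eigenvector of the smallest eigenvalue of the neighborhood covariance matrix. A minimal sketch of this standard PCA normal estimation (not the authors' implementation) follows; the radius is an illustrative assumption.

```python
import numpy as np

def estimate_normal(points, idx, radius=0.5):
    """PCA normal estimation: the eigenvector of the smallest eigenvalue of
    the neighborhood covariance matrix approximates the surface normal."""
    d = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[d <= radius]
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
    w, v = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return v[:, 0]               # eigenvector of the smallest eigenvalue

# Points on the z = 0 plane: the estimated normal should be (0, 0, +/-1)
plane = np.array([[0, 0, 0], [0.3, 0, 0], [0, 0.3, 0],
                  [-0.3, 0, 0], [0, -0.3, 0]], float)
n = estimate_normal(plane, 0)
print(abs(n[2]))  # 1.0
```

Abrupt changes in these normals between adjacent points are exactly the boundary cue that a boundary-enhanced clustering can exploit.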


Author(s):  
Y. Cao ◽  
M. Previtali ◽  
M. Scaioni

Abstract. In the wake of the success of Deep Learning Networks (DLN) for image recognition, object detection, shape classification, and semantic segmentation, this approach has proven to be both a major breakthrough and an excellent tool in point cloud classification. However, an understanding of how different types of DLN achieve their results is still lacking. In several studies, the output of the segmentation/classification process is compared against benchmarks, but the network is treated as a “black box” and intermediate steps are not deeply analysed. Specifically, the following questions are discussed here: (1) what exactly does a DLN learn from a point cloud? (2) On the basis of what information do DLN make their decisions? To conduct a quantitative investigation of DLN applied to point clouds, this paper investigates the visual interpretability of their decision-making process. First, we introduce a reconstruction network able to reconstruct and visualise the learned features, addressing question (1). Then, we propose 3DCAM to indicate the discriminative point cloud regions used by these networks to identify a given category, addressing question (2). By answering these two questions, the paper offers some initial solutions towards a better understanding of the application of DLN to point clouds.


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Lopamudra Dey ◽  
Sanjay Chakraborty

The significance and applications of clustering span various fields. Because clustering is an unsupervised process in data mining, the proper evaluation of its results and the measurement of the compactness and separability of the clusters are important issues. The procedure of evaluating the results of a clustering algorithm is known as cluster validity measurement. Different types of indices are used to solve different types of problems, and index selection depends on the kind of data available. This paper first proposes a Canonical PSO based K-means clustering algorithm, analyses some important clustering indices (intercluster, intracluster), and then evaluates the effects of those indices on a real-time air pollution database and on wholesale customer, wine, and vehicle datasets using typical K-means, Canonical PSO based K-means, simple PSO based K-means, DBSCAN, and hierarchical clustering algorithms. The paper also describes the nature of the clusters and compares the performance of these clustering algorithms according to the validity assessment, identifying which algorithm is most desirable for forming properly compact clusters on these particular real-life datasets. In essence, it examines the behaviour of these clustering algorithms with respect to validation indices and presents the evaluation results in mathematical and graphical forms.
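Intracluster and intercluster indices of the kind analysed here can be sketched minimally: compactness as the mean distance of points to their cluster centroid, and separation as the minimum distance between centroids. This is a generic illustration, not the paper's exact index definitions.

```python
import numpy as np

def intra_inter_indices(X, labels):
    """Compactness: mean point-to-centroid distance per cluster, averaged.
    Separation: minimum distance between cluster centroids."""
    ks = np.unique(labels)
    centroids = np.array([X[labels == k].mean(axis=0) for k in ks])
    intra = np.mean([np.linalg.norm(X[labels == k] - centroids[i], axis=1).mean()
                     for i, k in enumerate(ks)])
    inter = min(np.linalg.norm(centroids[i] - centroids[j])
                for i in range(len(ks)) for j in range(i + 1, len(ks)))
    return intra, inter

X = np.array([[0.0, 0], [0, 1], [10, 0], [10, 1]])
labels = np.array([0, 0, 1, 1])
intra, inter = intra_inter_indices(X, labels)
print(intra, inter)  # 0.5 10.0
```

A good clustering drives the intracluster value down and the intercluster value up, which is the intuition behind ratio-style indices such as Dunn's.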


2019 ◽  
Vol 11 (23) ◽  
pp. 2727 ◽  
Author(s):  
Ming Huang ◽  
Pengcheng Wei ◽  
Xianglei Liu

Plane segmentation is a basic yet important process in light detection and ranging (LiDAR) point cloud processing. Traditional point cloud plane segmentation algorithms are typically affected by the number of points and by noise, which results in slow segmentation and poor segmentation quality. Hence, an efficient encoding voxel-based segmentation (EVBS) algorithm based on a fast adjacent voxel search is proposed in this study. First, a binary octree algorithm is proposed to construct and encode the voxels used as segmentation objects, which allows voxel features to be computed quickly and accurately. Second, a voxel-based region growing algorithm is proposed to cluster the corresponding voxels and perform the initial point cloud segmentation, which improves the rationality of seed selection. Finally, a refining point method is proposed to solve the problem of under-segmentation in unlabeled voxels by judging the relationship between the points and the segmented plane. Experimental results demonstrate that the proposed algorithm outperforms traditional algorithms in terms of computation time, extraction accuracy, and recall rate.
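A simplified version of voxel-based region growing can be sketched by voxelizing the cloud and growing regions over face-adjacent occupied voxels. The EVBS algorithm additionally encodes voxels with a binary octree and applies planarity tests during growth, both of which are omitted in this illustration.

```python
import numpy as np
from collections import deque

def voxel_region_grow(points, voxel=1.0):
    """Voxelize the cloud, then grow regions over face-adjacent occupied voxels
    (a simplified voxel region growing without octree encoding or plane tests)."""
    occ = {tuple(v) for v in np.floor(points / voxel).astype(int)}
    seen, regions = set(), []
    for seed in occ:
        if seed in seen:
            continue
        seen.add(seed)
        queue, region = deque([seed]), []
        while queue:
            v = queue.popleft()
            region.append(v)
            for d in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                      (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (v[0] + d[0], v[1] + d[1], v[2] + d[2])
                if n in occ and n not in seen:
                    seen.add(n)
                    queue.append(n)
        regions.append(region)
    return regions

pts = np.array([[0.5, 0.5, 0.5], [1.5, 0.5, 0.5], [10.5, 0.5, 0.5]])
print(len(voxel_region_grow(pts)))  # 2
```

Operating on voxels rather than raw points is what makes this class of methods largely insensitive to the total point count.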


2014 ◽  
Vol 513-517 ◽  
pp. 4193-4196
Author(s):  
Wen Bao Qiao ◽  
Ming Guo ◽  
Jun Jie Liu

In this paper, we propose an efficient way to produce an initial transformation matrix for two point clouds, which can effectively avoid the local optima that arise when matching two point clouds with the standard Iterative Closest Point (ICP) algorithm. In our approach, the correspondences used to calculate the transformation matrix are confirmed before the point clouds are formed. We use depth images that have been carefully target-segmented to find the boundaries of the shapes that reflect different views of the same target object. The curvature scale space (CSS) method is then applied to each contour to find a sequence of characteristic points, and our method is applied to these characteristic points to find the best-matching pairs. Finally, we convert the matched characteristic points to 3D points, thereby confirming the correspondences. These can be used to compute an initial transformation matrix that tells the computer which part of the first point cloud should be matched to the second. In this way, the two point clouds are placed in a correct initial location, so that the local optima of ICP and its variants can be avoided.
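Once matching pairs of 3D characteristic points are confirmed, an initial rigid transformation can be computed in closed form with the standard Kabsch/SVD method, a common choice for this step (the abstract does not specify the authors' exact solver):

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch/SVD: least-squares rotation R and translation t such that
    R @ src_i + t ~= dst_i, from already-matched correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known 90-degree rotation about z plus a translation
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

Applying this transform to the first cloud places it near the second, after which ICP only needs to refine a nearly correct alignment.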


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5331
Author(s):  
Ouk Choi ◽  
Min-Gyu Park ◽  
Youngbae Hwang

We present two algorithms for aligning two colored point clouds. The two algorithms are designed to minimize a probabilistic cost based on the color-supported soft matching of points in a point cloud to their K-closest points in the other point cloud. The first algorithm, like prior iterative closest point algorithms, refines the pose parameters to minimize the cost. Assuming that the point clouds are obtained from RGB-depth images, our second algorithm regards the measured depth values as variables and minimizes the cost to obtain refined depth values. Experiments with our synthetic dataset show that our pose refinement algorithm gives better results compared to the existing algorithms. Our depth refinement algorithm is shown to achieve more accurate alignments from the outputs of the pose refinement step. Our algorithms are applied to a real-world dataset, providing accurate and visually improved results.
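The color-supported soft matching can be sketched as weighting each source point's K closest targets by Gaussian kernels on spatial distance and color difference. The kernel forms and parameters below are illustrative assumptions, not the paper's exact probabilistic cost.

```python
import numpy as np

def soft_match_weights(p_xyz, p_rgb, q_xyz, q_rgb, k=3, sigma_d=0.5, sigma_c=0.2):
    """Soft matching of one source point to its K nearest targets, with weights
    from Gaussian kernels on spatial distance and color difference."""
    d = np.linalg.norm(q_xyz - p_xyz, axis=1)
    knn = np.argsort(d)[:k]
    w = (np.exp(-d[knn] ** 2 / sigma_d ** 2)
         * np.exp(-np.linalg.norm(q_rgb[knn] - p_rgb, axis=1) ** 2 / sigma_c ** 2))
    return knn, w / w.sum()

# A red source point: the coincident red target should dominate the match
p = np.array([0.0, 0.0, 0.0])
red = np.array([1.0, 0.0, 0.0])
q_xyz = np.array([[0, 0, 0], [0.1, 0, 0], [2, 0, 0]], float)
q_rgb = np.array([[1, 0, 0], [0, 0, 1], [1, 0, 0]], float)
knn, w = soft_match_weights(p, red, q_xyz, q_rgb, k=2)
print(knn[0], round(w[0], 2))  # 0 1.0
```

Summing such weighted residuals over all source points yields a cost that either the pose parameters or, in the second algorithm, the measured depth values can be refined against.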

