Pose Estimation of Sweet Pepper through Symmetry Axis Detection

Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3083 ◽  
Author(s):  
Hao Li ◽  
Qibing Zhu ◽  
Min Huang ◽  
Ya Guo ◽  
Jianwei Qin

The spatial pose of a fruit is necessary for accurate detachment in automatic harvesting. This study presents a novel pose estimation method for sweet pepper detachment. In this method, the normal to the local plane at each point of the sweet-pepper point cloud was first calculated. The point cloud was then divided by a number of candidate planes, and a score was calculated for each candidate plane using a scoring strategy. The plane with the lowest score was selected as the symmetry plane of the point cloud. The symmetry axis was finally derived from the selected symmetry plane, and the pose of the sweet pepper in space was obtained from this axis. The performance of the proposed method was evaluated on simulated data and on a sweet-pepper point cloud dataset. In the simulated test, the average angle error between the calculated symmetry axis and the real axis was approximately 6.5°. On the sweet-pepper dataset, the average error was approximately 7.4° when the peduncle was removed and approximately 6.9° when the peduncle was intact. These results suggest that the proposed method is suitable for pose estimation of sweet peppers and could be adapted for other fruits and vegetables.
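The abstract does not spell out the scoring strategy, but a common way to score a candidate symmetry plane is to reflect the cloud across it and measure how far the mirrored points fall from the original cloud (lower is better). A minimal NumPy sketch under that assumption:

```python
import numpy as np

def reflect(points, plane_point, plane_normal):
    """Reflect points across the plane given by a point and a normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n          # signed distance to the plane
    return points - 2.0 * d[:, None] * n    # mirror across the plane

def plane_score(points, plane_point, plane_normal):
    """Score a candidate symmetry plane: mean distance from each mirrored
    point to its nearest neighbour in the original cloud."""
    mirrored = reflect(points, plane_point, plane_normal)
    # brute-force nearest neighbour; fine for small demo clouds
    dists = np.linalg.norm(points[None, :, :] - mirrored[:, None, :], axis=2)
    return dists.min(axis=1).mean()

# demo: a cloud that is exactly symmetric about the x = 0 plane
rng = np.random.default_rng(0)
half = rng.uniform(0.1, 1.0, size=(50, 3))
cloud = np.vstack([half, half * [-1, 1, 1]])

good = plane_score(cloud, np.zeros(3), np.array([1.0, 0.0, 0.0]))
bad = plane_score(cloud, np.zeros(3), np.array([0.0, 0.0, 1.0]))
```

The candidate plane with the lowest score would be kept as the symmetry plane, and its in-plane principal direction would give the symmetry axis.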

Author(s):  
Zhiming Chen ◽  
Lei Li ◽  
Yunhua Wu ◽  
Bing Hua ◽  
Kang Niu

Purpose: On-orbit service technology is one of the key technologies for space manipulation activities such as spacecraft life extension, faulty-spacecraft capture and on-orbit debris removal. Failed satellites, space debris and adversary spacecraft are almost all non-cooperative targets. Relatively accurate pose estimation is critical to spatial operations, but it is also a recognized technical difficulty because no prior information about non-cooperative targets is available. With the rapid development of laser radar, laser scanning equipment is increasingly applied to the measurement of non-cooperative targets, so a new pose estimation method for non-cooperative targets based on 3D point clouds is needed. The paper aims to discuss these issues.

Design/methodology/approach: In this paper, a method based on the inherent characteristics of a spacecraft is proposed for estimating the pose (position and attitude) of a spatial non-cooperative target. First, the obtained point cloud is preprocessed to reduce noise and improve data quality. Second, according to the features of the satellite, a recognition system for non-cooperative measurement is designed; components common to satellite configurations are chosen as the recognized objects. Finally, based on the identified objects, the ICP algorithm is used to compute the pose between two point cloud frames captured at different times.

Findings: The new method increases matching speed and improves the accuracy of pose estimation compared with traditional methods by reducing the number of matching points. The recognition of components on non-cooperative spacecraft directly contributes to space docking, on-orbit capture and relative navigation.

Research limitations/implications: Limited by the measurement range of the laser radar, this paper considers pose estimation for non-cooperative spacecraft only at close range.

Practical implications: The pose estimation method in this paper is mainly applicable to close-proximity space operations, such as the final rendezvous phase of spacecraft or the ultra-close approach phase of target capture. The system can recognize the components to be captured and provide the relative pose of the non-cooperative spacecraft. The method is more robust than traditional single-component recognition and overall matching methods when the laser radar scan is incomplete or components are occluded.

Originality/value: This paper introduces a new pose estimation method for non-cooperative spacecraft based on point clouds. The experimental results show that the proposed method can effectively identify the features of non-cooperative targets and track their position and attitude. The method is robust to noise and greatly improves the speed of pose estimation while maintaining accuracy.
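The final step, point-to-point ICP between two frames, can be sketched in a few lines: alternate nearest-neighbour matching with the closed-form (SVD/Kabsch) rigid alignment. This is a generic textbook ICP, not the paper's implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping paired src
    points onto dst (Kabsch / SVD)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour matching and
    closed-form alignment; returns the accumulated R, t."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]           # nearest scene point
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t   # compose transforms
    return R_tot, t_tot

# demo: recover a small known rotation + translation on a grid "target"
g = np.array([-1.0, 0.0, 1.0])
model = np.array([[x, y, z] for x in g for y in g for z in g])
ang = 0.05
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.02, 0.03])
scene = model @ Rz.T + t_true
R_est, t_est = icp(model, scene)
```

In the paper, this registration runs only on the recognized components, which is what reduces the number of matching points and speeds up convergence.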


Author(s):  
Zihao Zhang ◽  
Lei Hu ◽  
Xiaoming Deng ◽  
Shihong Xia

3D human pose estimation is a fundamental problem in artificial intelligence, with wide applications in AR/VR, HCI and robotics. However, human pose estimation from point clouds still suffers from noisy points and jittery estimates because of hand-crafted point cloud sampling and single-frame estimation strategies. In this paper, we present a new perspective on 3D human pose estimation from point cloud sequences. To sample effective points from the input, we design a differentiable point cloud sampling method built on a density-guided attention mechanism. To avoid the jitter of single-frame estimation, we exploit temporal information to obtain more stable results. Experiments on the ITOP and NTU-RGBD datasets demonstrate that all of our contributed components are effective, and our method achieves state-of-the-art performance.
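The paper's density-guided sampler is a learned, differentiable module; the hand-crafted baseline it replaces is typically farthest point sampling. A sketch of that baseline clarifies what "point cloud sampling" refers to here:

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Classic farthest point sampling: greedily pick k points that are
    maximally spread out. This is the kind of hand-crafted, non-learned
    sampler that a density-guided differentiable sampler replaces."""
    rng = np.random.default_rng(seed)
    n = len(points)
    chosen = [rng.integers(n)]                     # random first pick
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())                   # farthest from chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

pts = np.random.default_rng(2).normal(size=(200, 3))
sample = farthest_point_sampling(pts, 16)
```

Because the argmax here is non-differentiable, gradients cannot flow through the sampling step, which is precisely the limitation a differentiable, attention-based sampler addresses.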


Author(s):  
Yapeng Gao

For table tennis robots, it is a significant challenge to understand the opponent's movements and return the ball accordingly, since they must cope with the various ball speeds and spins produced by different stroke types. In this paper, we propose a real-time 6D racket pose detection method and classify racket movements into five stroke categories with a neural network. Using two monocular cameras, we extract the racket's contours and choose special points on them as feature points in image coordinates. Using the 3D geometry of the racket, a wide-baseline stereo matching method is proposed to find corresponding feature points and compute the racket's 3D position and orientation by triangulation and plane fitting. A Kalman filter is then adopted to track the racket pose, and a multilayer perceptron (MLP) neural network is used to classify the pose movements. We conduct two experiments to evaluate the accuracy of racket pose detection and classification: the average error is around 7.8 mm in position and 7.2° in orientation compared with ground truth from a KUKA robot, and the classification accuracy is 98%, the same as a human pose estimation method based on Convolutional Pose Machines (CPMs).
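The two geometric steps, triangulating matched feature points from the two cameras and fitting a plane to recover the racket's orientation, can be sketched with standard linear methods (DLT triangulation and an SVD plane fit). The camera matrices below are illustrative, not from the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v)."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = homogeneous 3D point
    return X[:3] / X[3]

def fit_plane_normal(points):
    """Least-squares plane normal of a 3D point set (smallest right
    singular vector of the centred cloud); gives racket orientation."""
    centred = points - points.mean(0)
    _, _, Vt = np.linalg.svd(centred)
    return Vt[-1]

# demo: two toy cameras with a baseline of 1 along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
X_est = triangulate(P1, P2, h1[:2] / h1[2], h2[:2] / h2[2])

# plane fit on points lying in the z = 0 plane
rng = np.random.default_rng(3)
flat = np.column_stack([rng.normal(size=(30, 2)), np.zeros(30)])
normal = fit_plane_normal(flat)
```

Triangulating several racket feature points and fitting a plane through them yields the racket's 3D position (centroid) and orientation (plane normal), which the Kalman filter then tracks over time.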


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3489
Author(s):  
Bo Gu ◽  
Jianxun Liu ◽  
Huiyuan Xiong ◽  
Tongtong Li ◽  
Yuelong Pan

In vehicle pose estimation based on roadside Lidar for cooperative perception, the measurement distance, angle, and laser resolution directly affect the quality of the target point cloud. For incomplete and sparse point clouds, current methods are either less accurate, because of correspondences solved from local descriptors, or not robust enough, because of the reduced number of effective boundary points. To address these weaknesses, this paper proposes a registration algorithm, Environment Constraint Principal Component-Iterative Closest Point (ECPC-ICP), which integrates road information constraints. The road normal feature is extracted, and the principal component of the vehicle point cloud matrix under the road normal constraint is calculated as the initial pose estimate. An accurate 6D pose is then obtained through point-to-point ICP registration. According to the measurement characteristics of roadside Lidars, this paper also defines a point cloud sparseness measure, and existing algorithms were tested on point cloud data of varying sparseness. In simulated experiments, the positioning MAE of ECPC-ICP was about 0.5% of the vehicle scale, the orientation MAE was about 0.26°, and the average registration success rate was 95.5%, demonstrating improved accuracy and robustness over current methods. In the real test environment, the positioning MAE was about 2.6% of the vehicle scale, and the average time cost was 53.19 ms, confirming the accuracy and effectiveness of ECPC-ICP in practical applications.
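The initial-pose step, the principal component of the vehicle cloud under the road normal constraint, can plausibly be read as: project the points onto the road plane, then take the dominant eigenvector of their covariance as the vehicle heading. A sketch under that assumption:

```python
import numpy as np

def constrained_principal_axis(points, road_normal):
    """Dominant direction of a vehicle cloud constrained to the road
    plane: project points onto the plane, then take the eigenvector of
    their covariance with the largest eigenvalue."""
    n = road_normal / np.linalg.norm(road_normal)
    centred = points - points.mean(0)
    flat = centred - np.outer(centred @ n, n)   # remove normal component
    cov = flat.T @ flat
    w, v = np.linalg.eigh(cov)                  # eigenvalues ascending
    axis = v[:, -1]                             # largest eigenvalue
    return axis / np.linalg.norm(axis)

# demo: a box-like "vehicle" elongated along x on a flat road (normal = z)
rng = np.random.default_rng(4)
car = rng.uniform([-2.0, -0.8, 0.0], [2.0, 0.8, 1.5], size=(300, 3))
axis = constrained_principal_axis(car, np.array([0.0, 0.0, 1.0]))
```

Anchoring the heading to the road plane in this way keeps the initial pose stable even when the cloud is sparse or one-sided, after which point-to-point ICP refines the full 6D pose.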


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 428 ◽  
Author(s):  
Guichao Lin ◽  
Yunchao Tang ◽  
Xiangjun Zou ◽  
Juntao Xiong ◽  
Jinhui Li

Fruit detection in real outdoor conditions is necessary for automatic guava harvesting, and the branch-dependent pose of a fruit is also crucial for guiding a robot to approach and detach the target fruit without colliding with its mother branch. To enable automatic, collision-free picking, this study investigates a fruit detection and pose estimation method using a low-cost red–green–blue–depth (RGB-D) sensor. A state-of-the-art fully convolutional network is first deployed to segment the RGB image into a fruit and branch binary map. Based on the fruit binary map and the RGB-D depth image, Euclidean clustering is then applied to group the point cloud into a set of individual fruits. Next, a multiple three-dimensional (3D) line-segment detection method is developed to reconstruct the segmented branches. Finally, the 3D pose of each fruit is estimated from its center position and nearest-branch information. A dataset was acquired in an outdoor orchard to evaluate the performance of the proposed method. Quantitative experiments showed that the precision and recall of guava fruit detection were 0.983 and 0.948, respectively; the 3D pose error was 23.43° ± 14.18°; and the execution time per fruit was 0.565 s. The results demonstrate that the developed method can be applied in a guava-harvesting robot.
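The Euclidean clustering step that splits the fruit points into individual fruits is essentially single-linkage flood fill with a distance tolerance. A minimal version (the tolerance value is illustrative):

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, tol):
    """Group points into clusters in which neighbours closer than `tol`
    are connected (flood fill / single linkage), as used to split the
    fruit points into individual fruits."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    cur = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue                      # already assigned to a cluster
        q = deque([seed])
        labels[seed] = cur
        while q:
            i = q.popleft()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.nonzero((d < tol) & (labels == -1))[0]:
                labels[j] = cur
                q.append(j)
        cur += 1
    return labels

# demo: two well-separated "fruit" blobs
rng = np.random.default_rng(5)
a = rng.normal(0.0, 0.05, size=(40, 3))
b = rng.normal(0.0, 0.05, size=(40, 3)) + [3.0, 0.0, 0.0]
labels = euclidean_cluster(np.vstack([a, b]), tol=0.5)
```

Each resulting cluster then yields one fruit center, from which the pose is derived together with the nearest reconstructed branch segment.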


2021 ◽  
Vol 11 (9) ◽  
pp. 4241
Author(s):  
Jiahua Wu ◽  
Hyo Jong Lee

In bottom-up multi-person pose estimation, grouping joint candidates into the correct person instances is challenging. In this paper, a new bottom-up method, the Partitioned CenterPose (PCP) Network, is proposed to better cluster detected joints. To achieve this, we propose a novel approach called Partition Pose Representation (PPR), which ties a person instance to its body joints through joint offsets: PPR encodes a human pose as the center of the human body together with the offsets from that center to the positions of the body's joints. To strengthen the relationships between body joints, we divide the human body into five parts and generate a sub-PPR for each part. Based on this representation, the PCP Network detects people and their body joints simultaneously and then groups all body joints according to joint offset. Moreover, an improved L1 loss is designed to measure joint offsets more accurately. Testing on the COCO keypoints and CrowdPose datasets showed that the performance of the proposed method is on par with that of existing state-of-the-art bottom-up methods in both accuracy and speed.
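The center-plus-offset idea behind PPR can be illustrated with a toy decoder and grouping rule: each person is a body center plus one offset per joint, and a detected joint candidate is assigned to the person whose predicted joint position is nearest. The numbers below are made up for illustration:

```python
import numpy as np

def decode_poses(centers, offsets):
    """Center-plus-offset decoding: joints = center + per-joint offset.
    centers: (P, 2); offsets: (P, J, 2); returns (P, J, 2)."""
    return centers[:, None, :] + offsets

def group_candidates(centers, offsets, candidates, joint_id):
    """Assign each candidate for one joint type to the person whose
    predicted position for that joint is nearest."""
    preds = decode_poses(centers, offsets)[:, joint_id]       # (P, 2)
    d = np.linalg.norm(candidates[:, None, :] - preds[None], axis=2)
    return d.argmin(axis=1)            # person index for each candidate

# demo: two people, two joints each, in image coordinates
centers = np.array([[10.0, 10.0], [50.0, 10.0]])
offsets = np.array([[[-2.0, 5.0], [2.0, 5.0]],    # person 0 joint offsets
                    [[-2.0, 5.0], [2.0, 5.0]]])   # person 1 joint offsets
joints = decode_poses(centers, offsets)
cands = np.array([[8.2, 14.9], [52.1, 15.2]])     # noisy joint-0 detections
assign = group_candidates(centers, offsets, cands, 0)
```

The network's improved L1 loss is what supervises the offsets so that this nearest-prediction grouping remains reliable in crowded scenes.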


2021 ◽  
Vol 13 (4) ◽  
pp. 803
Author(s):  
Lingchen Lin ◽  
Kunyong Yu ◽  
Xiong Yao ◽  
Yangbo Deng ◽  
Zhenbang Hao ◽  
...  

As a key canopy structure parameter, the Leaf Area Index (LAI) and its estimation methods have always attracted attention. To explore a potential low-cost method for estimating forest LAI from 3D point clouds, we captured drone photos at different camera tilt angles and set up five schemes (O (0°), T15 (15°), T30 (30°), OT15 (0° and 15°) and OT30 (0° and 30°)) for reconstructing a 3D point cloud of the forest canopy by photogrammetry. The LAI values and the vertical distribution of leaf area derived from the five schemes were then calculated from a voxelized model. Our results show that a severe lack of leaf area in the middle and lower layers makes the LAI estimate of scheme O inaccurate. For oblique photogrammetry, schemes with 30° photos always provided better LAI estimates than schemes with 15° photos (T30 better than T15, OT30 better than OT15), mainly in the lower part of the canopy and particularly in low-LAI areas. The overall structure from the single-tilt-angle schemes (T15, T30) was relatively complete, but the coarse point cloud detail could not reflect the actual LAI well. The multi-angle schemes (OT15, OT30) provided excellent leaf area estimates (OT15: R2 = 0.8225, RMSE = 0.3334 m2/m2; OT30: R2 = 0.9119, RMSE = 0.1790 m2/m2). OT30 gave the best LAI estimation accuracy at a sub-voxel size of 0.09 m and the best checkpoint accuracy (OT30: RMSE [H] = 0.2917 m, RMSE [V] = 0.1797 m). These results highlight that coupling oblique and nadiral photography can be an effective way to estimate forest LAI.
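The voxelized model behind the vertical leaf-area profile can be sketched as: snap canopy points to a voxel grid and count occupied voxels per height layer (the 0.09 m cell size matches the best sub-voxel size reported for OT30). The demo cloud is synthetic:

```python
import numpy as np

def voxel_leaf_profile(points, voxel=0.09):
    """Voxelize a canopy cloud and count occupied voxels per height
    layer: a simplified stand-in for the voxel-based leaf-area profile.
    Returns occupied-voxel counts indexed by z layer."""
    idx = np.floor(points / voxel).astype(int)   # voxel index per point
    occupied = np.unique(idx, axis=0)            # each filled voxel once
    layers = occupied[:, 2]                      # z layer of each voxel
    return np.bincount(layers - layers.min())

# demo: two flat "leaf layers" at different heights over a 10x10 footprint
xy = np.stack(np.meshgrid(np.arange(10), np.arange(10)), -1).reshape(-1, 2) * 0.09
low = np.column_stack([xy, np.full(100, 0.045)])   # layer near the ground
high = np.column_stack([xy, np.full(100, 0.50)])   # upper canopy layer
profile = voxel_leaf_profile(np.vstack([low, high]))
```

In this simplification, the paper's finding reads as: nadiral-only clouds leave the lower entries of such a profile nearly empty, while adding 30° oblique photos fills them in, which is why OT30 estimates LAI best.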


2021 ◽  
Author(s):  
Weiqian Guo ◽  
Rendong Ying ◽  
Peilin Liu ◽  
Weihang Wang
