Synthetic photometric landmarks used for absolute navigation near an asteroid

2020 ◽  
Vol 124 (1279) ◽  
pp. 1281-1300
Author(s):  
O. Knuuttila ◽  
A. Kestilä ◽  
E. Kallio

Autonomous location estimation in the form of optical navigation is an essential requirement for forthcoming deep-space missions. While crater-based navigation may work well for larger bodies littered with craters, small sub-kilometer bodies do not necessarily have craters. We have developed a new pose estimation method for absolute navigation based on photometric local feature extraction techniques, making it suitable for missions that cannot rely on craters. The algorithm can be used by a navigation filter in conjunction with relative pose estimation, such as visual odometry, for additional robustness and accuracy. To estimate the position and orientation of the spacecraft in the asteroid-fixed coordinate frame, it uses navigation camera images combined with other readily available information, such as the orientation relative to the stars and the current time, for an initial estimate of the asteroid rotation state. The algorithm is evaluated with different feature extractors, on one hand using Monte Carlo simulations and, on the other, using actual images taken by the Rosetta spacecraft orbiting the comet 67P/Churyumov–Gerasimenko. Our analysis, comparing four feature extraction methods (AKAZE, ORB, SIFT, SURF), showed that AKAZE is the most promising in terms of stability and accuracy.
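The geometry behind such landmark-based absolute navigation can be illustrated with a small sketch (this is not the authors' algorithm): once the camera attitude is known, e.g. from a star tracker, each matched landmark yields a line of sight, and the spacecraft position is the least-squares intersection of those lines. The landmark coordinates below are synthetic and the bearings noise-free.

```python
import numpy as np

def position_from_bearings(landmarks, bearings):
    """Least-squares spacecraft position from bearings towards known 3D
    landmarks.  Camera attitude is assumed known (e.g. from a star
    tracker), so the bearings are already in the asteroid-fixed frame."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(landmarks, bearings):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line of sight
        A += P                           # each landmark constrains x to its line
        b += P @ p
    return np.linalg.solve(A, b)         # x minimising total point-to-line distance

# Synthetic check with noise-free bearings
x_true = np.array([10.0, -4.0, 7.0])
landmarks = np.array([[1.0, 2.0, 0.5], [-3.0, 0.0, 1.0], [0.0, 5.0, -2.0]])
bearings = landmarks - x_true            # lines of sight from spacecraft to landmarks
x_est = position_from_bearings(landmarks, bearings)
print(np.allclose(x_est, x_true))        # noise-free case recovers the position exactly
```

With noisy bearings the same normal equations give the maximum-likelihood position under isotropic errors, which is why such a solver fits naturally inside a navigation filter.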

2021 ◽  
Author(s):  
Ying Bi ◽  
Mengjie Zhang ◽  
Bing Xue

© 2018 IEEE. Feature extraction is an essential process in image classification. Existing feature extraction methods can extract important and discriminative image features but often require domain expertise and human intervention. Genetic Programming (GP) can automatically extract features that are more adaptive to different image classification tasks. However, the majority of GP-based methods only extract relatively simple features of one type, i.e., local or global, which is neither effective nor efficient for complex image classification. In this paper, a new GP method (GP-GLF) is proposed to automatically and simultaneously extract global and local features for image classification. To extract discriminative image features, several effective and well-known feature extraction methods, such as HOG, SIFT and LBP, are employed as GP functions in global and local scenarios. A novel program structure is developed to allow GP-GLF to evolve descriptors that can synthesise feature vectors from the input image and the automatically detected regions using these functions. The performance of the proposed method is evaluated on four image classification data sets of varying difficulty and compared with seven GP-based methods and a set of non-GP methods. Experimental results show that the proposed method achieves performance significantly better than, or similar to, that of almost all the peer methods. Further analysis of the evolved programs shows the good interpretability of the GP-GLF method.
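Of the descriptors the method wires in as GP functions, LBP is the simplest to sketch. Below is a minimal numpy implementation of the basic 8-neighbour variant (radius 1, no interpolation), not the paper's GP system; the histogram of codes serves as the feature vector.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern codes for the interior
    pixels of a grayscale image (radius 1, no interpolation)."""
    c = img[1:-1, 1:-1]
    # Neighbours in a fixed clockwise order, each a shifted view of the image
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= ((n >= c).astype(np.uint8) << bit)  # one bit per neighbour
    return codes

img = np.array([[5, 5, 5, 5],
                [5, 9, 1, 5],
                [5, 5, 5, 5]], dtype=np.int32)
codes = lbp_codes(img)         # (1, 2): the bright pixel codes 0, the dark one 255
hist = np.bincount(codes.ravel(), minlength=256)   # LBP histogram = feature vector
```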


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4366 ◽  
Author(s):  
Francisco Molina Martel ◽  
Juri Sidorenko ◽  
Christoph Bodensteiner ◽  
Michael Arens ◽  
Urs Hugentobler

In this work, we introduce a relative localization method that estimates the coordinate frame transformation between two devices based on distance measurements. We present a linear algorithm that calculates the relative pose in 2D or 3D with four degrees of freedom (4-DOF). This algorithm needs a minimum of five or six distance measurements, respectively, to estimate the relative pose uniquely. We use the linear algorithm in conjunction with outlier detection algorithms and as a good initial estimate for iterative least-squares refinement. The proposed method outperforms other related linear methods in terms of the number of distance measurements needed and in terms of accuracy. Compared with a related linear algorithm in 2D, we reduce the translation error by 10%. In contrast to the more general 6-DOF linear algorithm, our 4-DOF method reduces the minimum number of distances needed from ten to six and the rotation error by a factor of four at the standard deviation of our ultra-wideband (UWB) transponders; when using the same number of measurements, the orientation and translation errors are reduced by approximately a factor of ten. We validate our method with simulations and an experimental setup in which we integrate UWB technology into simultaneous localization and mapping (SLAM)-based devices. The presented relative pose estimation method is intended for augmented reality applications with cooperative localization on head-mounted displays. We foresee practical use cases of this method in cooperative SLAM, where map merging can be performed proactively.
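The iterative least-squares refinement stage mentioned above can be sketched generically. The code below is a Gauss-Newton refinement for the simpler 2D, 3-DOF case, not the paper's linear algorithm or its 4-DOF formulation; the anchor layouts `a` and `b` are hypothetical, and the initial guess plays the role of the linear algorithm's output.

```python
import numpy as np

def refine_pose_2d(a, b, d, theta, t, iters=30):
    """Gauss-Newton refinement of a 2D relative pose (theta, t) from the
    pairwise distances d[i, j] between anchors a[i] in device A's frame
    and b[j] in device B's frame.  Model: d_ij = ||a_i - (R(theta) b_j + t)||."""
    t = np.asarray(t, dtype=float).copy()
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        dR = np.array([[-s, -c], [c, -s]])       # dR/dtheta
        J, r = [], []
        for i in range(len(a)):
            for j in range(len(b)):
                diff = a[i] - (R @ b[j] + t)     # b_j mapped into A's frame
                rng = np.linalg.norm(diff)
                u = diff / rng
                r.append(rng - d[i, j])          # range residual
                J.append([-u @ (dR @ b[j]), -u[0], -u[1]])
        J, r = np.array(J), np.array(r)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        theta, t = theta + step[0], t + step[1:]
        if np.linalg.norm(step) < 1e-12:
            break
    return theta, t

# Synthetic check: recover a known pose from exact distances
c, s = np.cos(0.3), np.sin(0.3)
R_true, t_true = np.array([[c, -s], [s, c]]), np.array([2.0, 1.0])
a = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
b = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d = np.linalg.norm(a[:, None, :] - (b @ R_true.T + t_true)[None, :, :], axis=2)
th, t = refine_pose_2d(a, b, d, theta=0.2, t=[1.5, 0.5])
```

With nine distances for three unknowns the noise-free problem has a unique minimum, and Gauss-Newton converges in a few iterations from a coarse initial estimate.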


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Ying Miao ◽  
Danyang Shao ◽  
Zhimin Yan

In this paper, we analyze location-following processing of images by successive approximation under directed-privacy requirements. To detect a moving human body against a dynamic background, the motion target detection module integrates two ideas, feature information detection and human body model segmentation detection, and uses a deep learning framework to detect the human body via the feature points of its key parts. Because detecting human key points depends on a human pose estimation algorithm, the research in this paper builds on the bottom-up model of multi-person pose estimation: first, all human key points in the image are detected through feature extraction with a convolutional neural network; then, accurate labelling of the key points is achieved by fusing the heat map and offset predictions in the feature point confidence map; finally, the human body detection results are obtained. For the correlation algorithm, this paper combines the HOG feature extraction of the KCF algorithm with the scale filter of the DSST algorithm to form a fused correlation filter built on the principle of the MOSSE correlation filter. The fused algorithm remedies the KCF algorithm's lack of scale estimation and the DSST algorithm's low frame rate, improving tracking accuracy while preserving real-time performance.
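The MOSSE correlation filter underlying the fused tracker admits a compact sketch: train a filter in the frequency domain so that correlation with the template produces a desired Gaussian response, then locate the target at the response peak. Below is a minimal single-frame version on synthetic data, not the paper's fused KCF/DSST tracker.

```python
import numpy as np

def train_mosse(patches, target, lam=1e-2):
    """MOSSE filter: H* = sum(G . conj(F)) / (sum(F . conj(F)) + lam),
    where F are FFTs of training patches and G is the FFT of the
    desired response centred on the target (lam regularises)."""
    G = np.fft.fft2(target)
    num = np.zeros_like(G)
    den = np.full_like(G, lam)
    for p in patches:
        F = np.fft.fft2(p)
        num += G * np.conj(F)
        den += F * np.conj(F)
    return num / den                     # conjugate filter H* in the frequency domain

def respond(H_conj, patch):
    """Correlation response map; its peak locates the target."""
    return np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))

# Synthetic check: a bright blob, desired response peaked at its centre
yy, xx = np.mgrid[0:32, 0:32]
patch = np.exp(-((yy - 20) ** 2 + (xx - 12) ** 2) / 8.0)
target = np.exp(-((yy - 20) ** 2 + (xx - 12) ** 2) / 4.0)
H_conj = train_mosse([patch], target)
peak = np.unravel_index(np.argmax(respond(H_conj, patch)), patch.shape)
print(peak)   # the response peaks at the blob centre (20, 12)
```

In a tracker, the numerator and denominator are accumulated over frames with a learning rate, which is what makes MOSSE-style filters fast enough for real-time use.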


2020 ◽  
Author(s):  
Vricha Chavan ◽  
Jimit Shah ◽ 
Mrugank Vora ◽  
Mrudula Vora ◽  
Shubhashini Verma

Author(s):  
Meiyan Zhang ◽  
Wenyu Cai

Background: Effective 3D localization in mobile underwater sensor networks is still an active research topic. Due to the sparse character of underwater sensor networks, AUVs (Autonomous Underwater Vehicles) with precise positioning abilities can benefit cooperative localization, so studying accurate localization methods is of great significance. Methods: In this paper, a cooperative and distributed 3D localization algorithm for sparse underwater sensor networks is proposed. The proposed algorithm combines the advantages of recursive location estimation by reference nodes with the outstanding self-positioning ability of a mobile AUV. Moreover, our design uses an MMSE (Minimum Mean Squared Error) based recursive location estimation method in the 2D horizontal plane projected from the 3D region and then revises the positions of un-localized sensor nodes through multiple Time of Arrival (ToA) measurements with mobile AUVs. Results: Simulation results verify that the proposed cooperative 3D localization scheme improves localization coverage ratio, average localization error, and localization confidence level. Conclusion: The research can improve localization accuracy and coverage ratio for the whole underwater sensor network.
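The ToA-based position revision can be illustrated with classic multilateration: squaring the range equations and subtracting the first from the rest cancels the quadratic term in the unknown position, leaving a linear system. The anchor positions below (standing in for AUV measurement waypoints) are hypothetical and the ranges noise-free.

```python
import numpy as np

def toa_position(anchors, ranges):
    """Linearised least-squares position fix from ToA ranges to anchors
    at known positions.  Subtracting the first squared range equation
    from the others removes the ||x||^2 term, giving A x = b."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)   # least squares handles noise too
    return x

anchors = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0],
                    [0.0, 50.0, 0.0], [0.0, 0.0, 30.0], [50.0, 50.0, 10.0]])
x_true = np.array([21.0, 13.0, 8.0])
ranges = np.linalg.norm(anchors - x_true, axis=1)
print(toa_position(anchors, ranges))   # recovers [21., 13., 8.]
```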


2021 ◽  
Vol 11 (9) ◽  
pp. 4241
Author(s):  
Jiahua Wu ◽  
Hyo Jong Lee

In bottom-up multi-person pose estimation, grouping joint candidates into the appropriately structured instance of the corresponding person is challenging. In this paper, a new bottom-up method, the Partitioned CenterPose (PCP) Network, is proposed to better cluster the detected joints. To achieve this goal, we propose a novel approach called Partition Pose Representation (PPR), which integrates the instance of a person and its body joints based on joint offset. PPR leverages information about the center of the human body and the offsets between that center point and the positions of the body's joints to encode human poses accurately. To enhance the relationships between body joints, we divide the human body into five parts and generate a sub-PPR for each part. Based on this PPR, the PCP Network can detect people and their body joints simultaneously, then group all body joints according to joint offset. Moreover, an improved L1 loss is designed to more accurately measure joint offset. Using the COCO keypoints and CrowdPose datasets for testing, it was found that the performance of the proposed method is on par with that of existing state-of-the-art bottom-up methods in terms of accuracy and speed.
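The joint-offset grouping idea can be illustrated with a toy decoder: treat peaks in a centre heat map as person instances, then read a per-joint (dy, dx) offset at each centre to place that person's joints. This is a deliberate simplification of PPR (no body-part partitioning, hypothetical map layouts), shown only to make the encoding concrete.

```python
import numpy as np

def decode_poses(center_map, offset_maps, thresh=0.5):
    """Toy joint-offset decoder: person centres are peaks in center_map;
    each joint is placed at centre + its offset read at the centre pixel.
    offset_maps has shape (num_joints, 2, H, W) storing (dy, dx)."""
    poses = []
    ys, xs = np.where(center_map > thresh)
    for y, x in zip(ys, xs):
        joints = [(y + offset_maps[j, 0, y, x],
                   x + offset_maps[j, 1, y, x])
                  for j in range(offset_maps.shape[0])]
        poses.append(joints)
    return poses

# One person centred at (4, 5) with two joints offset from the centre
H = W = 10
center = np.zeros((H, W)); center[4, 5] = 1.0
offsets = np.zeros((2, 2, H, W))
offsets[0, :, 4, 5] = (-2.0, 0.0)      # joint 0: two pixels above the centre
offsets[1, :, 4, 5] = (3.0, 1.0)       # joint 1: below-right of the centre
poses = decode_poses(center, offsets)   # one pose: joints at (2, 5) and (7, 6)
```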

