An improved method for compensating ultra-tiny electromagnetic tracker utilizing position and orientation information and its application to a flexible neuroendoscopic surgery navigation system

Author(s):  
Zhengang Jiang ◽  
Kensaku Mori ◽  
Yukitaka Nimura ◽  
Marco Feuerstein ◽  
Takayuki Kitasaka ◽  
...  
2019 ◽  
Vol 9 (14) ◽  
pp. 2808 ◽  
Author(s):  
Yahui Peng ◽  
Xiaochen Liu ◽  
Chong Shen ◽  
Haoqian Huang ◽  
Donghua Zhao ◽  
...  

Aiming to enhance the accuracy and reliability of velocity calculation in vision navigation, an improved method is proposed in this paper. The method integrates Mask R-CNN (Mask Region-based Convolutional Neural Network) and K-Means with the pyramid Lucas-Kanade algorithm to reduce the harmful effect of moving objects on velocity calculation. First, Mask R-CNN recognizes objects that move relative to the ground and covers them with masks, enhancing the similarity between the remaining pixels and suppressing the noisy moving pixels. Then, the pyramid Lucas-Kanade algorithm computes the optical flow. Finally, the flow values are clustered by the K-Means algorithm to discard outliers, and vehicle velocity is calculated from the remaining optical flow. The prominent advantages of the proposed algorithm are (i) reducing the adverse impact on velocity calculation of objects that move relative to the ground; (ii) obtaining correct optical flow sets and velocity outputs with less fluctuation; and (iii) enhancing the applicability of the optical flow algorithm in complex navigation environments. The proposed algorithm is tested in real-world experiments. Results with superior precision and reliability show the feasibility and effectiveness of the proposed method for vehicle velocity calculation in vision navigation systems.
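
The outlier-rejection step above can be sketched in miniature: cluster the optical-flow magnitudes into two groups, keep the larger (assumed ground) cluster, and average it to get speed. This is an illustrative sketch, not the authors' code; the 2-means routine, the pixel-to-metre scale, and the frame interval are all hypothetical placeholders.

```python
# Sketch: reject outlier optical-flow vectors with a simple 1-D 2-means
# split, then average the inlier magnitudes to estimate vehicle speed.

def two_means_1d(values, iters=20):
    """Minimal 1-D 2-means: returns (inliers, outliers) by cluster size."""
    c = [min(values), max(values)]        # initial centroids at the extremes
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[abs(v - c[0]) > abs(v - c[1])].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    # the larger cluster is assumed to be ground pixels (inliers)
    return sorted(groups, key=len, reverse=True)

def speed_from_flow(flow_magnitudes, metres_per_pixel=0.01, dt=1 / 30):
    inliers, _ = two_means_1d(flow_magnitudes)
    mean_flow = sum(inliers) / len(inliers)   # pixels per frame
    return mean_flow * metres_per_pixel / dt  # metres per second

# mostly consistent ground flow plus a few fast moving-object pixels
flow = [2.0, 2.1, 1.9, 2.05, 2.0, 9.5, 9.8]
print(round(speed_from_flow(flow), 3))
```

In the real pipeline the flow field would come from the pyramid Lucas-Kanade tracker and the masked regions would already have been removed by Mask R-CNN before clustering.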


2013 ◽  
Vol 7 (2) ◽  
pp. 182-189 ◽  
Author(s):  
M. Peña-Cabrera ◽  
V. Lomas-Barrie ◽  
I. López-Juárez ◽  
R. Osorio-Comparán ◽  
...  

The article presents a method for obtaining the contour of an object in real time from non-binarized images for recognition purposes. The contour information is integrated into a descriptive vector named BOF, which a FuzzyARTMAP Artificial Neural Network (ANN) model uses to learn the object and later recognize it. In this way, it is possible to learn the location and recognition of parts and to communicate the position and orientation of an object to a robot arm for assembly purposes. Another contour-extraction method, based on binarized images, is compared with the method described in this paper; both are implemented and tested on a Field Programmable Gate Array (FPGA) architecture. Since an ANN can be implemented more efficiently in the parallel structure an FPGA provides, it is desirable to implement the contour-extraction algorithm equally efficiently in the same way.
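
A boundary-based descriptor of the kind the abstract names can be sketched as follows. This is an assumed form, not the authors' exact BOF definition: sample the contour at N evenly spaced points and store the centroid-to-contour distances, normalised by the maximum so the vector is scale-invariant.

```python
# Illustrative sketch of a BOF-style contour descriptor (assumed form):
# distances from the contour centroid to N sampled boundary points,
# normalised to [0, 1] for scale invariance.
import math

def bof_descriptor(contour, n=8):
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    step = len(contour) / n
    dists = []
    for i in range(n):
        x, y = contour[int(i * step)]
        dists.append(math.hypot(x - cx, y - cy))
    m = max(dists)
    return [d / m for d in dists]

# a square contour: corners and edge midpoints alternate in the vector
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
print(bof_descriptor(square))
```

A fixed-length vector like this is convenient as the input layer of a FuzzyARTMAP network, since every object yields the same number of features regardless of contour length.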


2018 ◽  
Vol 7 (2.7) ◽  
pp. 642
Author(s):  
V Appala Raju ◽  
P Vasundhara ◽  
V ChandraKanth Reddy ◽  
A Sai Aiswarya

This paper deals with methods for state estimation, that is, estimating the position and orientation of an Unmanned Aerial Vehicle (UAV) using GPS, gyroscope, accelerometer, and magnetometer sensors. Various methods have been designed for UAV position and orientation measurement. In this paper we propose an extended Kalman filter based inertial navigation system (EKF-INS) using quaternions and a 3D magnetometer. We first load UAV truth data from a file, generate noisy UAV sensor measurements, perform UAV state estimation, and display the state estimates. The proposed method is compared in simulation with an existing extended Kalman filter based attitude and heading reference system using quaternions and a 3D magnetometer. Results show that the EKF-INS method gives better UAV position and orientation estimates.
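
The attitude-propagation step at the heart of a quaternion INS can be sketched generically (this is a standard first-order gyro integration, not the paper's full EKF): the quaternion derivative is q̇ = ½ q ⊗ (0, ωx, ωy, ωz), integrated over one step and renormalised.

```python
# Sketch of quaternion attitude propagation from body rates (wx, wy, wz),
# with q = (w, x, y, z). Generic first-order update, not the paper's EKF.
import math

def quat_propagate(q, wx, wy, wz, dt):
    w, x, y, z = q
    # quaternion derivative: q_dot = 0.5 * q ⊗ (0, wx, wy, wz)
    dw = 0.5 * (-x * wx - y * wy - z * wz)
    dx = 0.5 * ( w * wx + y * wz - z * wy)
    dy = 0.5 * ( w * wy - x * wz + z * wx)
    dz = 0.5 * ( w * wz + x * wy - y * wx)
    w, x, y, z = w + dw * dt, x + dx * dt, y + dy * dt, z + dz * dt
    n = math.sqrt(w * w + x * x + y * y + z * z)  # renormalise
    return (w / n, x / n, y / n, z / n)

# rotate about z at 0.1 rad/s for 1 s in small steps
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(100):
    q = quat_propagate(q, 0.0, 0.0, 0.1, 0.01)
yaw = 2 * math.atan2(q[3], q[0])
print(round(yaw, 4))   # ≈ 0.1 rad
```

In a full EKF-INS this propagation would form part of the prediction step, with the magnetometer and GPS measurements entering through the update step.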


PLoS ONE ◽  
2019 ◽  
Vol 14 (7) ◽  
pp. e0220502
Author(s):  
Jeppe H. Christensen ◽  
Peter J. Bex ◽  
József Fiser

2018 ◽  
Vol 51 (9-10) ◽  
pp. 431-442 ◽  
Author(s):  
Yang Bo ◽  
Wang Yue-gang ◽  
Xue Liang ◽  
Shan Bin ◽  
Wang Bao-cheng

In order to realize maneuver combat in modern warfare, some special military vehicles require the ability to determine their position and orientation rapidly and accurately, and the position and orientation system should be highly autonomous with strong anti-jamming capability. A high-accuracy, independent position and orientation method for vehicles that utilizes a strapdown inertial navigation system and Doppler radar is therefore presented in this article. Laser gyroscopes in the strapdown inertial navigation system and Doppler radar are adopted to develop a dead-reckoning system for vehicles. Subsequently, the attitude-, velocity-, and position-updating algorithms of the dead-reckoning system are designed. The error sources of the dead-reckoning system are analyzed to establish the system error model, including the attitude error equations of the mathematical platform, the velocity error equations, and the position error equations. The errors of the strapdown inertial navigation system and the dead-reckoning system are selected as the system states of the integrated position and orientation method. The differences between the attitude and position outputs of the strapdown inertial navigation system and those of the dead-reckoning system are chosen as the measurements of the integrated position and orientation. A Kalman filter is then adopted to design the filtering algorithm of the integrated position and orientation. Finally, the integrated position and orientation method is validated by simulation and vehicular experiments. The experimental results show that the strapdown inertial navigation system/Doppler radar integration can realize accurate positioning and orientation over a long time, and the accuracy of the attitude/position integration mode is significantly higher than that of the velocity/position integration mode.
Therefore, the former integration mode is more suitable for accurate vehicle position and orientation.
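
The dead-reckoning core the abstract describes reduces to advancing position from a measured ground speed and a measured heading. A minimal 2-D sketch, with an assumed x-east/y-north frame and heading measured counter-clockwise from the x-axis (not the paper's full attitude/velocity/position algorithm):

```python
# Minimal dead-reckoning position update: advance 2-D position from
# Doppler ground speed and the inertially derived heading at each step.
import math

def dead_reckon(x, y, speed, heading_rad, dt):
    # heading from the x-axis, counter-clockwise (an assumed convention)
    return (x + speed * math.cos(heading_rad) * dt,
            y + speed * math.sin(heading_rad) * dt)

x = y = 0.0
for _ in range(10):                 # 10 s at 5 m/s along the y-axis
    x, y = dead_reckon(x, y, 5.0, math.pi / 2, 1.0)
print(round(x, 6), round(y, 6))
```

The integrated method then compares outputs like these against the strapdown inertial navigation system's own solution, feeding the differences to a Kalman filter as measurements.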


2010 ◽  
Vol 66 (suppl_2) ◽  
pp. ons342-ons353 ◽  
Author(s):  
Eiji Ito ◽  
Masazumi Fujii ◽  
Yuichiro Hayashi ◽  
Jiang Zhengang ◽  
Tetsuya Nagatani ◽  
...  

Abstract OBJECTIVE The authors have developed a novel intraoperative neuronavigation with 3-dimensional (3D) virtual images, a 3D virtual navigation system, for neuroendoscopic surgery. The present study describes this technique and clinical experience with the system. METHODS Preoperative imaging data sets were transferred to a personal computer to construct virtual endoscopic views with image segmentation software. An electromagnetic tracker was used to acquire the position and orientation of the tip of the neuroendoscope. Virtual endoscopic images were interlinked to an electromagnetic tracking system and demonstrated on the navigation display in real time. Accuracy and efficacy of the 3D virtual navigation system were evaluated in a phantom test and on 5 consecutive patients undergoing neuroendoscopic surgery. RESULTS Virtual navigation views were consistent with actual endoscopic views and trajectory in both phantom testing and clinical neuroendoscopic surgery. Anatomic structures that can affect surgical approaches were adequately predicted with the virtual navigation system. The virtual semitransparent view contributed to a clear understanding of spatial relationships between surgical targets and surrounding structures. Surgical procedures in all patients were performed while confirming with virtual navigation. In neurosurgery with a flexible neuroscope, virtual navigation also demonstrated anatomic structures in real time. CONCLUSION The interactive method of intraoperative visualization influenced the decision-making process during surgery and provided useful assistance in identifying safe approaches for neuroendoscopic surgery. The magnetically guided navigation system enabled navigation of surgical targets in both rigid and flexible endoscopic surgeries.
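
Driving a virtual endoscopic view from a tracked tip requires mapping points between the sensor frame and the reference frame using the tracker's reported pose. A minimal sketch, with an assumed yaw-only orientation (a real tracker reports a full 3-D rotation, typically as a quaternion or matrix):

```python
# Sketch of the pose step behind tracked virtual endoscopy: map a point
# expressed in the sensor frame into the reference frame using the
# tracker's reported position and (here, yaw-only) orientation.
import math

def sensor_to_world(p_sensor, tracker_pos, yaw):
    """Rotate p_sensor by yaw about z, then translate by tracker_pos."""
    x, y, z = p_sensor
    c, s = math.cos(yaw), math.sin(yaw)
    xr, yr = c * x - s * y, s * x + c * y
    tx, ty, tz = tracker_pos
    return (xr + tx, yr + ty, z + tz)

# a point 10 mm ahead of the sensor tip; tip at (100, 50, 0), yawed 90°
print(sensor_to_world((10.0, 0.0, 0.0), (100.0, 50.0, 0.0), math.pi / 2))
```

In the actual system this transform chain also includes the registration between the tracker's coordinate frame and the preoperative image volume, so that the virtual camera follows the endoscope tip in real time.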


1999 ◽  
Vol 11 (1) ◽  
pp. 45-53 ◽  
Author(s):  
Shinji Kotani ◽  
Ken’ichi Kaneko ◽  
Tatsuya Shinoda ◽  
Hideo Mori ◽  
...  

This paper describes a navigation system for an autonomous mobile robot operating outdoors. The robot uses vision to detect landmarks and DGPS information to determine its initial position and orientation. The vision system detects landmarks in the environment by referring to an environmental model. As the robot moves, it calculates its position by conventional dead reckoning and matches detected landmarks to the environmental model to reduce the error in the position calculation. The robot's initial position and orientation are calculated from the coordinates of the first and second locations acquired by DGPS. Subsequent orientations and positions are derived by map matching. We implemented the system on a mobile robot, Harunobu 6. Experiments in real environments verified the effectiveness of the proposed navigation method.
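
The DGPS initialisation the abstract describes can be sketched directly: the heading is the direction of travel between the first two fixes. This sketch assumes the fixes are already projected into a local x-east/y-north metric frame (real use needs a lat/lon projection first).

```python
# Sketch: derive the robot's initial pose from its first two DGPS fixes,
# given in an assumed local x-east / y-north coordinate frame.
import math

def initial_pose(fix1, fix2):
    x1, y1 = fix1
    x2, y2 = fix2
    heading = math.atan2(y2 - y1, x2 - x1)   # direction of travel, rad
    return (x2, y2, heading)                 # current position + heading

# robot moved 3 m east and 4 m north between the two fixes
print(initial_pose((0.0, 0.0), (3.0, 4.0)))
```

From this initial pose, dead reckoning propagates position between landmark observations, and map matching against the environmental model corrects the accumulated drift.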


PLoS ONE ◽  
2019 ◽  
Vol 14 (2) ◽  
pp. e0212141 ◽  
Author(s):  
Jeppe H. Christensen ◽  
Peter J. Bex ◽  
József Fiser
