Trajectory Generation and Object Tracking of Mobile Robot Using Multiple Image Fusion

10.5772/6576 ◽  
2009 ◽  
Author(s):  
TaeSeok Jin ◽  
Hideki Hashimoto
Author(s):  
Weidong Wang ◽  
Chengjin Du ◽  
Zhijiang Du

Purpose – This paper presents a prototype medical transportation robot whose positioning accuracy reaches millimeter level for patient transportation. With this mobile robot, a fully automatic image diagnosis process among independent CT/PET devices, together with image fusion, can be achieved.

Design/methodology/approach – Following a short introduction, a large-load 4WD-4WS (four-wheel driving and four-wheel steering) mobile robot for carrying patients among multiple medical imaging devices is developed. A specially designed bedplate with a self-locking function is also introduced. To further improve positioning accuracy, the authors propose a calibration method based on Gaussian process regression (GPR) to process the measurement data of the sensors. The robot's performance is verified by a calibration experiment and an image fusion experiment. Finally, concluding comments are drawn.

Findings – By calibrating the robot's positioning system with the proposed GPR method, the accuracies of the robot's offset distance and deflection angle reach 0.50 mm and ±0.21°, respectively. Independent repeated trials were then set up to verify this result. A subsequent phantom experiment shows that image fusion is accurate to within 0.57 mm in the front-rear direction and 0.83 mm in the left-right direction, while the clinical experiment shows that the proposed robot can practically realize patient transportation and image fusion between multiple imaging diagnosis devices.

Practical implications – The proposed robot offers an economical image fusion solution for medical institutions whose imaging diagnosis systems comprise independent MRI, CT and PET devices. A fully automatic diagnosis process can also be achieved, so that the patient is spared the discomfort of getting in and out of the bed and the doctor's radiation exposure is avoided.

Social implications – The general bedplate presented in Section 2, which can be mounted on the CT and PET devices, and the self-locking mechanism realize the catching and releasing of the patient on different medical devices. The authors also provide a detailed method for patient handling and orientation maintenance, which was hardly mentioned in previous research. By establishing the positioning relationship between the robot and the different medical equipment, a fully automatic diagnosis process can be achieved, so that the patient is spared the discomfort of getting in and out of the bed and the doctor's radiation exposure is avoided.

Originality/value – The GPR-based method proposed in this paper offers a novel way to enhance the positioning accuracy of industrial AGVs, while the transportation robot also offers a solution for modern image fusion diagnosis, which is predicated on conjoint analysis across different kinds of medical devices.
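The abstract does not specify the kernel, hyperparameters, or sensor model used in the GPR calibration, so the following is only a minimal illustrative sketch of the general idea: fit a Gaussian process to measured positioning errors at known commanded positions, then predict (and correct for) the error at new positions. All names and the RBF kernel choice are assumptions, not the authors' implementation.

```python
# Minimal GPR sketch for positioning calibration (illustrative only).
# Assumption: a 1-D commanded position x and a measured error y; the
# paper's actual sensor model and kernel are not given in the abstract.
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between two 1-D position arrays."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gpr_fit_predict(x_train, y_train, x_test, noise=1e-4):
    """Fit GPR on (x_train, y_train) and return posterior mean and
    standard deviation at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    K_ss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)       # K^{-1} y
    mean = K_s @ alpha                        # posterior mean
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std
```

In a calibration setting, x_train would be commanded robot positions, y_train the offset errors measured by the external sensors, and the predicted mean would be subtracted from future commands as a correction.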


2020 ◽  
Vol 34 (01) ◽  
pp. 759-766
Author(s):  
Jing Li ◽  
Jing Xu ◽  
Fangwei Zhong ◽  
Xiangyu Kong ◽  
Yu Qiao ◽  
...  

Active Object Tracking (AOT) is crucial to many vision-based applications, e.g., mobile robots and intelligent surveillance. However, deploying active tracking in complex scenarios poses a number of challenges, e.g., the target is frequently occluded by obstacles. In this paper, we extend single-camera AOT to a multi-camera setting, where cameras track a target in a collaborative fashion. To achieve effective collaboration among cameras, we propose a novel Pose-Assisted Multi-Camera Collaboration System, which enables a camera to cooperate with the others by sharing camera poses for active object tracking. In the system, each camera is equipped with two controllers and a switcher: the vision-based controller tracks the target based on observed images, while the pose-based controller moves the camera in accordance with the poses of the other cameras. At each step, the switcher decides which of the two controllers' actions to take according to the visibility of the target. The experimental results demonstrate that our system outperforms all the baselines and is capable of generalizing to unseen environments. The code and demo videos are available at https://sites.google.com/view/pose-assisted-collaboration.
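The per-camera architecture described above (two controllers plus a visibility-driven switcher) can be sketched as follows. This is only an illustration of the control flow: in the paper both controllers are learned policies and visibility comes from the vision pipeline, whereas the stand-in controllers and the pan-angle representation here are hypothetical.

```python
# Sketch of the pose-assisted per-camera architecture from the abstract.
# Controllers and visibility signal are hypothetical stand-ins.

class PoseAssistedCamera:
    """One camera agent: a vision-based controller, a pose-based
    controller, and a switcher that picks between them each step."""

    def __init__(self, vision_controller, pose_controller):
        self.vision_controller = vision_controller
        self.pose_controller = pose_controller

    def step(self, observation, other_camera_poses, target_visible):
        # Switcher: trust the image when the target is visible;
        # otherwise fall back to the poses shared by the other cameras.
        if target_visible:
            return self.vision_controller(observation)
        return self.pose_controller(other_camera_poses)

# Hypothetical stand-in controllers (the paper's are learned policies):
def vision_controller(observation):
    # Pan toward the target detected in the image.
    return "pan_left" if observation["target_x"] < 0.5 else "pan_right"

def pose_controller(other_poses):
    # Pan toward the mean pan angle reported by the other cameras.
    mean_pan = sum(p["pan"] for p in other_poses) / len(other_poses)
    return "pan_left" if mean_pan < 0 else "pan_right"
```

When the target is occluded, the camera ignores its own (uninformative) image and steers using only the shared poses, which is the collaboration mechanism the system's name refers to.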

