3D vision guided stove picking based on multi-channel image fusion in complex environment

2021 ◽  
Author(s):  
Chengwu Yang ◽  
Lingbo Meng ◽  
Yabin Li ◽  
Xiaotian Zhang ◽  
Kunbo Zhang
2012 ◽  
Vol 57 (5) ◽  
Author(s):  
Matthew Jian-qiao Peng ◽  
Wei-qiang Yin ◽  
Xiangyang Ju ◽  
Ashraf F. Ayoub ◽  
Balvinder S. Khambay ◽  
...  

Abstract: Because no complete three-dimensional (3D) hybrid PET+MRI detector has yet been integrated internationally, this study investigates a registration approach for a two-dimensional (2D) hybrid, based on characteristic localization, to achieve 3D fusion of PET and MRI images as a whole. A cube-oriented "9-point and 3-plane" coregistration scheme was verified to be geometrically practical. Through 3D reconstruction and virtual dissection, internal human feature points were sorted and combined with preselected external feature points for the matching process. Following the procedure of feature extraction and image mapping, the processes of "picking points to form planes" and "picking planes for segmentation" were executed. Finally, image fusion was implemented at the real-time Mimics workstation using auto-fuse techniques called "information exchange" and "signal overlay". A complementary 3D image across the PET+MRI modalities, simultaneously presenting metabolic activity and anatomic structure, was created with a detection rate of 56%. This is equivalent to the detection rate of PET+CT or MRI+CT, with no statistically significant difference, and it provides a 3D view that 2D hybrid imaging alone cannot. This cross-modality fusion is an essential complement to the existing toolkit of a 2D hybrid device and could improve the efficiency of diagnosis and therapy in oncology.
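The "9-point" matching step above depends on corresponding feature points identified in both the PET and MR volumes. As a hypothetical illustration only (the abstract does not disclose the authors' actual fitting method), the least-squares rigid alignment of such landmark pairs can be sketched with the classical Kabsch algorithm:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src landmarks onto dst.

    src, dst: (N, 3) arrays of corresponding landmark coordinates
    (e.g. nine matched feature points in the two modalities).
    Returns rotation R (3x3) and translation t (3,) so that
    dst ≈ src @ R.T + t.
    """
    src_c = src - src.mean(axis=0)           # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t
```

With nine well-spread, non-collinear landmarks the rotation and translation are overdetermined, which is one reason a "9-point" design gives a robust fit.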


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Yubin Yuan ◽  
Yu Shen ◽  
Jing Peng ◽  
Lin Wang ◽  
Hongguo Zhang

Since methods for removing fog from images are complicated, and detail loss and color distortion can occur in defogged images, this paper puts forward a defogging method based on near-infrared and visible image fusion. The algorithm uses the near-infrared image, with its rich detail, as a new data source and adopts image fusion to obtain a defogged image with rich detail and high color fidelity. First, the color visible image is converted into HSI color space to obtain intensity, hue, and saturation channel images. The intensity channel image is fused with the near-infrared image and defogged, and then decomposed by the Nonsubsampled Shearlet Transform. The resulting high-frequency coefficients are filtered with an edge-preserving double-exponential edge-smoothing filter, while unsharp masking is applied to the low-frequency coefficients. The new intensity channel image is then obtained through the fusion rule and the inverse transform. Next, in the color treatment of the visible image, a degradation model of the saturation image is established, whose parameters are estimated from the dark channel prior to obtain the estimated saturation image. Finally, the new intensity channel image, the estimated saturation image, and the hue channel image are mapped back to RGB space to obtain the fused image, which is enhanced by color and sharpness correction. To demonstrate the effectiveness of the algorithm, dense-fog and thin-fog images are compared against popular single-image and multi-image defogging algorithms, as well as a deep-learning-based visible/near-infrared fusion defogging algorithm. The experimental results show that the proposed algorithm improves edge contrast and visual sharpness better than existing high-efficiency defogging methods.
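The core idea of the pipeline above is to replace only the intensity channel of the visible image with a visible/NIR blend while leaving chromatic information untouched. A minimal sketch of just that step is below; it is an assumption-laden simplification (the NSST decomposition, edge-preserving filtering, and saturation-degradation modeling described in the abstract are all omitted), and it preserves chromaticity by per-pixel gain rescaling rather than a full HSI round trip:

```python
import numpy as np

def fuse_intensity(rgb, nir, w=0.5):
    """Blend the intensity channel of a visible RGB image with an NIR image.

    rgb: (H, W, 3) float array in [0, 1]; nir: (H, W) float array in [0, 1];
    w: weight of the NIR contribution.
    The intensity channel (mean of R, G, B, as in the HSI model) is blended
    with the NIR image; hue and saturation are approximately preserved by
    rescaling all three RGB channels by the same per-pixel gain.
    """
    intensity = rgb.mean(axis=2)                   # I channel of HSI
    fused = (1.0 - w) * intensity + w * nir        # simple weighted fusion
    gain = fused / np.maximum(intensity, 1e-6)     # per-pixel intensity gain
    return np.clip(rgb * gain[..., None], 0.0, 1.0)
```

Because every channel of a pixel is multiplied by the same gain, the R:G:B ratios (and hence the hue) are unchanged wherever no clipping occurs, which is the property the HSI-based design in the abstract exploits.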


2005 ◽  
Vol 173 (4S) ◽  
pp. 414-414
Author(s):  
Frank G. Fuechsel ◽  
Agostino Mattei ◽  
Sebastian Warncke ◽  
Christian Baermann ◽  
Ernst Peter Ritter ◽  
...  

2004 ◽  
Vol 43 (03) ◽  
pp. 85-90 ◽  
Author(s):  
E. Lopez Hänninen ◽  
Th. Steinmüller ◽  
T. Rohlfing ◽  
H. Bertram ◽  
M. Gutberlet ◽  
...  

Summary Aim: Minimally invasive resection of hyperfunctional parathyroid glands is an alternative to open surgery; however, it requires precise preoperative localization. This study evaluated the diagnostic use of magnetic resonance (MR) imaging, parathyroid scintigraphy, and subsequent image fusion. Patients, methods: 17 patients (9 women, 8 men; age: 29-72 years; mean: 51.2 years) with primary hyperparathyroidism were included. MRI examination used unenhanced T1- and T2-weighted sequences as well as contrast-enhanced T1-weighted sequences. 99mTc-MIBI scintigraphy consisted of planar and SPECT (single photon emission computed tomography) imaging. To improve the anatomical localization of a scintigraphic focus, the SPECT data were fused with the corresponding MR data using a modified version of the Express 5.0 software (Advanced Visual Systems, Waltham, MA). Results of image fusion were then compared to histopathology. Results: In 14/17 patients, a single parathyroid adenoma was found; there were 3 cases of hyperplastic glands. MRI detected 10 (71%) of the adenomas, scintigraphy 12 (86%). Both modalities detected 1/3 patients with hyperplasia. Image fusion improved the anatomical assignment of the 13 scintigraphic foci in five patients and was helpful in interpreting inconclusive MR findings in two patients. Conclusions: Both MRI and 99mTc-MIBI scintigraphy sensitively detect parathyroid adenomas but are less reliable in the case of hyperplastic glands. Given a scintigraphic focus, image fusion considerably improves its topographic assignment; furthermore, it facilitates the evaluation of inconclusive MRI findings.

