Interframe registration of IVUS images due to catheter rotation with feature-based optical flow

Author(s):  
Mikhail G. Danilouchkine ◽  
Frits Mastik ◽  
Antonius F. W. van der Steen
Author(s):  
R. Feng ◽  
X. Li ◽  
H. Shen

<p><strong>Abstract.</strong> Registration of mountainous remote sensing images is more complicated than in other areas because of the geometric distortion caused by topographic relief, which cannot be modeled precisely by constructing local mapping functions in the feature-based framework. Optical flow, which in computer vision estimates the motion between consecutive frames pixel by pixel, is therefore introduced for mountainous remote sensing image registration. However, it is sensitive to the land cover changes that are inevitable in remote sensing imagery, resulting in incorrect displacements. To address this problem, we propose an improved optical flow estimation focused on post-processing, namely displacement modification. First, the Laplacian of Gaussian (LoG) algorithm is employed to detect abnormal values in the color map of the displacement field. Then, each abnormal displacement is recalculated on an interpolation surface constructed from the remaining accurate displacements. After coordinate transformation and resampling, the registration result is generated. Experiments demonstrate that the proposed method is insensitive to changeable regions of mountainous remote sensing images, generates precise registrations, and outperforms other local transformation model estimation methods in both visual judgment and quantitative evaluation.</p>
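The detect-then-reinterpolate step above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it applies a plain discrete Laplacian directly to one displacement component (a stand-in for the full LoG on the displacement color map), flags pixels with a strong response, and replaces each flagged value with the mean of its 4-neighbours (a very simple interpolation surface); the grid, threshold, and kernel are illustrative assumptions.

```python
def laplacian_response(field):
    """Discrete Laplacian of a 2-D displacement component (interior pixels only)."""
    h, w = len(field), len(field[0])
    resp = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            resp[y][x] = (field[y - 1][x] + field[y + 1][x]
                          + field[y][x - 1] + field[y][x + 1]
                          - 4.0 * field[y][x])
    return resp

def correct_outliers(field, threshold):
    """Flag pixels whose Laplacian response exceeds the threshold and
    replace each one with the mean of its 4-neighbours."""
    resp = laplacian_response(field)
    h, w = len(field), len(field[0])
    fixed = [row[:] for row in field]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if abs(resp[y][x]) > threshold:
                neigh = [field[y - 1][x], field[y + 1][x],
                         field[y][x - 1], field[y][x + 1]]
                fixed[y][x] = sum(neigh) / len(neigh)
    return fixed

# a smooth displacement field with one abnormal spike at (2, 2)
field = [[float(x + y) for x in range(5)] for y in range(5)]
field[2][2] = 50.0
corrected = correct_outliers(field, threshold=50.0)
```

On this toy field the spike is the only pixel flagged, and it is pulled back onto the smooth surface of its neighbours.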


2020 ◽  
Vol 37 (12) ◽  
pp. 1958
Author(s):  
Wataru Suzuki ◽  
Atsushi Hiyama ◽  
Noritaka Ichinohe ◽  
Wakayo Yamashita ◽  
Takeharu Seno ◽  
...  

Robotica ◽  
2014 ◽  
Vol 34 (9) ◽  
pp. 1923-1947 ◽  
Author(s):  
Salam Dhou ◽  
Yuichi Motai

SUMMARY: An efficient method for tracking a target using a single Pan-Tilt-Zoom (PTZ) camera is proposed. The proposed Scale-Invariant Optical Flow (SIOF) method estimates the motion of the target and rotates the camera accordingly to keep the target at the center of the image. SIOF also estimates the scale of the target and adjusts the focal length to change the Field of View (FoV) so that the target appears at the same size in all captured frames. SIOF is a feature-based tracking method: feature points are extracted and tracked using Optical Flow (OF) and the Scale-Invariant Feature Transform (SIFT), combined in groups, and used to achieve robust tracking. The feature points in these groups are used within a twist model to recover the 3D free motion of the target. The merits of the proposed method are (i) an efficient scale-invariant tracking method that keeps the target in the camera's FoV at the same apparent size, and (ii) tracking with prediction and correction to speed up the PTZ control and achieve smooth camera motion. Experiments on online video streams validated the efficiency of SIOF compared with OF, SIFT, and other tracking methods. The proposed SIOF has around 36% less average tracking error and around 70% less tracking overshoot than OF.
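The zoom-control idea in the summary, keeping the target at a constant apparent size, can be sketched with a toy scale estimate. This is an illustrative assumption, not the paper's twist-model formulation: the scale change is taken as the ratio of total pairwise distances between matched feature points in two frames, and the focal length is divided by that scale so the target's apparent size stays constant.

```python
import math

def estimate_scale(pts_prev, pts_curr):
    """Apparent scale change of a target between two frames, estimated as
    the ratio of total pairwise distances between matched feature points."""
    def total_dist(pts):
        return sum(math.dist(pts[i], pts[j])
                   for i in range(len(pts))
                   for j in range(i + 1, len(pts)))
    return total_dist(pts_curr) / total_dist(pts_prev)

def compensating_focal_length(focal, scale):
    """Divide the focal length by the scale change so the target keeps the
    same apparent size (zoom out as the target grows in the image)."""
    return focal / scale

prev = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
curr = [(0.0, 0.0), (8.0, 0.0), (0.0, 6.0)]  # same triangle, twice as large
scale = estimate_scale(prev, curr)
new_focal = compensating_focal_length(20.0, scale)
```

Here the matched triangle doubles in size, so the sketch halves the focal length to compensate.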


2021 ◽  
Vol 13 (8) ◽  
pp. 1475
Author(s):  
Ruitao Feng ◽  
Qingyun Du ◽  
Huanfeng Shen ◽  
Xinghua Li

Although geometric registration has been studied in the remote sensing community for many decades, methods that register images while allowing for the locally inconsistent deformation caused by topographic relief remain rare. Toward this end, a region-by-region registration combining feature-based and optical flow methods is proposed. The framework is built on the calculation of pixel-wise displacements and the mosaicking of displacement fields. Concretely, the initial displacement fields for a pair of images are calculated by a block-weighted projective model in flat-terrain regions and by Brox optical flow estimation in complex-terrain regions. Abnormal displacements, caused by the sensitivity of optical flow to land use or land cover changes, are adaptively detected and corrected by a weighted Taylor expansion. The displacement fields are then mosaicked seamlessly for the subsequent steps. Experimental results show that the proposed method outperforms comparative algorithms, achieving the highest registration accuracy both qualitatively and quantitatively.
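The correction of an abnormal displacement from its valid neighbours can be sketched with a distance-weighted first-order (plane) fit, which plays the role of the paper's weighted Taylor expansion in this toy form; the neighbourhood radius, the 1/distance weights, and the plain least-squares plane are illustrative assumptions, not the authors' exact formulation.

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def taylor_correct(field, bad_y, bad_x, radius=2):
    """Re-estimate an abnormal displacement by a distance-weighted
    first-order fit v = a + b*dx + c*dy over its valid neighbours;
    the value at the bad pixel (dx = dy = 0) is the coefficient a."""
    h, w = len(field), len(field[0])
    S = [[0.0] * 3 for _ in range(3)]   # weighted normal equations
    r = [0.0] * 3
    for y in range(max(0, bad_y - radius), min(h, bad_y + radius + 1)):
        for x in range(max(0, bad_x - radius), min(w, bad_x + radius + 1)):
            if (y, x) == (bad_y, bad_x):
                continue                 # exclude the abnormal value itself
            dx, dy = x - bad_x, y - bad_y
            wgt = 1.0 / math.hypot(dx, dy)
            basis = (1.0, dx, dy)
            for i in range(3):
                for j in range(3):
                    S[i][j] += wgt * basis[i] * basis[j]
                r[i] += wgt * basis[i] * field[y][x]
    # Cramer's rule: replace the first column with the right-hand side
    Sa = [[r[i] if j == 0 else S[i][j] for j in range(3)] for i in range(3)]
    return det3(Sa) / det3(S)

# displacement field that is locally linear, with one abnormal pixel
field = [[2.0 * x + 3.0 * y for x in range(7)] for y in range(7)]
field[3][3] = 99.0
corrected = taylor_correct(field, 3, 3)
```

Because the surrounding field is exactly linear, the first-order fit recovers the true value at the corrupted pixel.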


2020 ◽  
Vol 12 ◽  
pp. 184797902098092
Author(s):  
Hyunchul Lee ◽  
Sungmin Lee ◽  
Okkyung Choi

With the rapid development of virtual reality technologies, image stitching is widely used in fields such as broadcasting, games, education, and architecture. Image stitching connects multiple images to produce a high-resolution image with a wide field of view. Most stitching methods find and match features in the images. However, these methods cannot create a perfect 360-degree panoramic image, because the depth of the projected area varies with the position and direction of adjacent cameras. Therefore, we propose an advanced stitching method that corrects the deviation caused by the depth difference of each area using the pixel values of the input images after feature-based stitching. Once feature-based stitching has been performed, the pixel values of the overlapping areas are processed with an optical flow algorithm, finely warped, and corrected so that the images overlap correctly. Experiments confirmed that the deviation left by feature-based stitching is resolved, and the performance evaluation showed that the proposed stitching method using an optical flow algorithm is fast enough for real-time service.
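The overlap-correction idea can be sketched in one dimension: after feature-based stitching, the residual misalignment of the overlap is estimated by comparing pixel values of the two images. The snippet below finds the integer shift minimizing the mean squared difference between two overlap profiles; this SSD search is a deliberately simplified stand-in for the per-pixel optical flow used in the paper, and the profiles and search range are illustrative.

```python
def best_shift(profile_a, profile_b, max_shift=5):
    """Integer shift aligning two overlap-region intensity profiles,
    found by minimizing the mean squared difference over all shifts."""
    best_s, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        err, n = 0.0, 0
        for i, va in enumerate(profile_a):
            j = i + s
            if 0 <= j < len(profile_b):
                err += (va - profile_b[j]) ** 2
                n += 1
        if n and err / n < best_err:
            best_err, best_s = err / n, s
    return best_s

# the same edge appears two pixels later in the second image's overlap
left_overlap = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]
right_overlap = [0, 0, 0, 0, 1, 3, 1, 0, 0, 0]
shift = best_shift(left_overlap, right_overlap)
```

The recovered shift would then drive the fine warp that makes the two overlap regions coincide.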

