Fusion of Enhanced and Synthetic Vision System Images for Runway and Horizon Detection

Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3802 ◽  
Author(s):  
Ahmed F. Fadhil ◽  
Raghuveer Kanneganti ◽  
Lalit Gupta ◽  
Henry Eberle ◽  
Ravi Vaidyanathan

Networked operation of unmanned air vehicles (UAVs) demands fusion of information from disparate sources for accurate flight control. In this investigation, a novel sensor fusion architecture for detecting aircraft runways and horizons, as well as enhancing awareness of the surrounding terrain, is introduced based on the fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations due to signal misalignment; we address this with a registration step that aligns the EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented, and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and on pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility. Furthermore, it is demonstrated that different aspects of the EVS and SVS images can be emphasized by using different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. The fusion architecture developed in this study holds promise for incorporation into manned head-up displays (HUDs) and UAV remote displays to assist pilots landing aircraft in poor lighting and varying weather. The algorithm also provides a basis for rule selection in other signal fusion applications.
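The abstract does not spell out the four DWT fusion rules. The sketch below is a minimal single-level Haar decomposition in NumPy that averages the approximation (LL) sub-band and takes the max-magnitude coefficient in each detail sub-band — one plausible member of this rule family; the function names and the specific rule are our assumptions, not the authors' implementation.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    h, w = LL.shape
    a = np.zeros((h, 2 * w)); d = np.zeros((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    img = np.zeros((2 * h, 2 * w))
    img[0::2, :] = a + d; img[1::2, :] = a - d
    return img

def fuse_dwt(evs, svs):
    """Fuse two registered images: average the LL bands, keep the
    larger-magnitude coefficient in each detail band."""
    be, bs = haar_dwt2(evs), haar_dwt2(svs)
    LL = (be[0] + bs[0]) / 2.0
    details = [np.where(np.abs(e) >= np.abs(s), e, s)
               for e, s in zip(be[1:], bs[1:])]
    return haar_idwt2(LL, *details)
```

Swapping the per-band combination (e.g. averaging the details, or selecting on the approximation band too) yields the other variants of the rule family the abstract compares.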



Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3494 ◽  
Author(s):  
Mei Yu ◽  
Kazhong Deng ◽  
Huachao Yang ◽  
Changbiao Qin

Image matching remains an open problem because of the geometric and radiometric distortion present in stereo remote sensing images. Weighted α-shape (WαSH) local invariant features are tolerant to image rotation, scale change, affine deformation, illumination change, and blurring. However, since the number of WαSH features is small, it is difficult to obtain enough matches to estimate a satisfactory homography or fundamental matrix. In addition, the WαSH detector is extremely sensitive to image noise because it is built on sampled edges. Considering these shortcomings, this paper improves the WαSH feature matching method using the 2D discrete wavelet transform (2D-DWT). The method first performs the 2D-DWT on the image and then detects WαSH features on the transformed images. According to how descriptors are constructed for the WαSH features, three matching methods are distinguished with respect to the character of the sub-images: wavelet transform WαSH features (WWF), improved wavelet transform WαSH features (IWWF), and layered IWWF (LIWWF). Experimental results on a dataset containing affine distortion, scale distortion, illumination change, and noisy images showed that the proposed methods acquired more matches and greater stability than WαSH. Experiments on remote sensing images with mild affine distortion and slight noise showed that the proposed methods achieved correct matching rates above 90%. For images containing severe distortion, KAZE obtained a 35.71% correct matching rate, which is unacceptable for calculating the homography matrix, while IWWF achieved 71.42%. IWWF was the only method to achieve a correct matching rate of at least 50% on all four test stereo remote sensing image pairs, and it was the most stable compared to MSER, DWT-MSER, WαSH, DWT-WαSH, KAZE, WWF, and LIWWF.
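The decompose-then-detect structure described above can be sketched minimally in NumPy. The WαSH detector itself is involved, so a stand-in gradient-magnitude detector on the noise-suppressed LL band is used here purely to show the pipeline shape; every name below is hypothetical and not from the paper.

```python
import numpy as np

def haar_ll(img):
    """LL (approximation) band of a single-level 2D Haar DWT: a
    low-pass, half-resolution image that suppresses pixel noise."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    return (a[:, 0::2] + a[:, 1::2]) / 2.0

def detect_on_ll(img, n=50):
    """Stand-in detector (NOT WαSH): take the n strongest
    gradient-magnitude responses on the LL band and map them back
    to full-image coordinates (x2, since LL is half resolution)."""
    ll = haar_ll(img)
    gy, gx = np.gradient(ll)
    mag = np.hypot(gx, gy)
    idx = np.argsort(mag, axis=None)[::-1][:n]
    rows, cols = np.unravel_index(idx, mag.shape)
    return np.stack([rows * 2, cols * 2], axis=1)
```

In the paper's methods the detection step runs on the wavelet sub-images and the descriptor construction differs per variant (WWF, IWWF, LIWWF); the point of the sketch is only the transform-first, detect-second ordering.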


2016 ◽  
Vol 2016 ◽  
pp. 1-12 ◽  
Author(s):  
Yingzhong Tian ◽  
Jie Luo ◽  
Wenjun Zhang ◽  
Tinggang Jia ◽  
Aiguo Wang ◽  
...  

Multifocus image fusion integrates a partially focused image sequence into one fused image that is in focus everywhere; many such methods have been proposed over the past decades. The Dual-Tree Complex Wavelet Transform (DTCWT) is among the most precise, eliminating two main defects of the Discrete Wavelet Transform (DWT). The Q-shift DTCWT was proposed afterwards to simplify the construction of filters in the DTCWT and produces better fusion results. A different image fusion strategy based on the Q-shift DTCWT is presented in this work. In this strategy, each image is first decomposed into low- and high-frequency coefficients, which are fused using different rules; various fusion rules, such as Neighborhood Variant Maximum Selectivity (NVMS) and the Sum-Modified Laplacian (SML), are then combined within the Q-shift DTCWT. Finally, the fused coefficients are extracted from the source images and reconstructed to produce one fully focused image. The strategy is verified visually and quantitatively against several existing fusion methods in extensive experiments and yields good results on both standard images and microscopic images. We therefore conclude that the NVMS rule outperforms the others under the Q-shift DTCWT.
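The SML focus measure named above has a standard definition: a modified Laplacian |2I − I_left − I_right| + |2I − I_up − I_down|, summed over a small window. A minimal NumPy version follows; the window size and tie-breaking are our assumptions, and the authors apply such rules to DTCWT coefficients rather than raw pixels.

```python
import numpy as np

def sml(img, win=3):
    """Sum-Modified-Laplacian focus measure over a win x win window."""
    p = np.pad(img, 1, mode='edge')
    ml = (np.abs(2 * img - p[1:-1, :-2] - p[1:-1, 2:]) +
          np.abs(2 * img - p[:-2, 1:-1] - p[2:, 1:-1]))
    k = win // 2
    pm = np.pad(ml, k, mode='edge')
    out = np.zeros_like(ml)
    for dy in range(win):          # box-sum over the window,
        for dx in range(win):      # NumPy-only (no scipy needed)
            out += pm[dy:dy + ml.shape[0], dx:dx + ml.shape[1]]
    return out

def fuse_by_sml(a, b):
    """Per-coefficient selection rule: keep whichever source has the
    higher SML response (NVMS would swap in a neighbourhood-variance
    criterion instead)."""
    return np.where(sml(a) >= sml(b), a, b)
```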


Multifocus image fusion is a current research topic in image processing for visual sensor networks. Discrete wavelet transform based fusion algorithms suffer from unintended effects such as edge smoothing, loss of contrast, and artifacts. To overcome these problems, a Stationary Wavelet Transform based algorithm using entropy and spatial-frequency fusion rules is proposed and applied to multifocus images. The Stationary Wavelet Transform preserves edges well and avoids artifacts thanks to its shift-invariance property. The entropy and spatial-frequency based fusion rules used in this work effectively characterize the intensity variations in an image, thereby minimizing loss of contrast. Simulation results show that the proposed method amply preserves edges and avoids artifacts with no loss of contrast.
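Both selection criteria named in this abstract have standard definitions; a minimal NumPy version of each is sketched below, assuming grey levels normalized to [0, 1] and a 256-bin histogram for the entropy (the abstract does not state the authors' exact parameters).

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency: combined RMS of row-wise and column-wise
    first differences, a simple activity measure for choosing the
    sharper source region."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def entropy(img, bins=256):
    """Shannon entropy (base 2) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                 # drop empty bins before the log
    return -np.sum(p * np.log2(p))
```

A fusion rule of the kind described would compare these measures per region or per sub-band and keep the coefficients of the source scoring higher.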


2014 ◽  
Vol 592-594 ◽  
pp. 801-805 ◽  
Author(s):  
Keerthi C. Ravi ◽  
P. Pai Srinivasa ◽  
J.S. Vishwanatha

This paper presents wavelet-based recognition of machined surfaces, namely turned, ground, and shaped surfaces, from images acquired using a computer vision system. The mother wavelet was selected based on the peak signal-to-noise ratio (PSNR) value obtained with the discrete wavelet transform (DWT), which was used for feature extraction. An artificial neural network was then used to recognize the machined surfaces.
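PSNR-based selection reduces to computing, for each candidate mother wavelet, the PSNR between the original image and its wavelet reconstruction, then keeping the wavelet with the highest score. A minimal NumPy sketch of the metric itself (an 8-bit peak value is assumed; the paper does not state its image depth):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; a higher value means the
    candidate wavelet reconstructs the surface texture more faithfully."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float('inf')      # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

The candidate mother wavelet yielding the highest PSNR would then be used for DWT feature extraction before the neural-network classifier.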


2004 ◽  
Author(s):  
Michael D. Byrne ◽  
Alex Kirlik ◽  
Michael D. Fleetwood ◽  
David G. Huss ◽  
Alex Kosorukoff ◽  
...  
