Research on an Infrared Multi-Target Saliency Detection Algorithm under Sky Background Conditions

Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 459
Author(s):  
Shaosheng Dai ◽  
Dongyang Li

To address the problem of incomplete saliency detection and unclear boundaries in infrared multi-target images with varying target sizes and low signal-to-noise ratios under sky background conditions, this paper proposes a multi-saliency-based detection method for multiple targets. In such infrared images, the target areas are mainly bright while the background areas are dark. First, using a multi-scale top-hat (Top-hat) transformation, the image is eroded and dilated to extract the difference between bright and dark parts, and the image is reconstructed to reduce interference from blurred sky background noise. The image obtained by the multi-scale Top-hat transformation is then transformed from the spatial domain to the frequency domain, where the spectral residual and the phase spectrum are each extracted and reconstructed by multi-scale Gaussian filtering to obtain two kinds of saliency maps. In parallel, quaternion features are extracted to transform the phase spectrum, which is then reconstructed by Gaussian filtering to obtain a third saliency map. Finally, the three saliency maps are fused to complete the saliency detection of the infrared image. Experimental analysis of infrared video frames and comparison using the Receiver Operating Characteristic (ROC) curve and Area Under the Curve (AUC) index show that the saliency maps generated by this method have clear target details and good background suppression, with AUC performance reaching over 99%. The method effectively improves multi-target saliency detection in infrared images under sky backgrounds and benefits subsequent target detection and tracking.
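The spectral-residual branch described above can be sketched in a few lines of numpy. This is a minimal illustration of the classic spectral-residual idea (log-amplitude minus its local average, recombined with the original phase), not the paper's full pipeline: the multi-scale Top-hat preprocessing and quaternion branch are omitted, and a box blur stands in for the Gaussian filtering.

```python
import numpy as np

def box_blur(a, k):
    """Simple box filter, used both as the local log-amplitude average
    and as a stand-in for Gaussian smoothing."""
    p = k // 2
    ap = np.pad(a, p, mode='edge')
    out = np.zeros_like(a, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += ap[di:di + a.shape[0], dj:dj + a.shape[1]]
    return out / (k * k)

def spectral_residual_saliency(img):
    """Spectral-residual saliency: the log-amplitude spectrum minus its
    local average is recombined with the original phase and transformed
    back to the spatial domain."""
    F = np.fft.fft2(img.astype(float))
    log_amp = np.log(np.abs(F) + 1e-8)
    phase = np.angle(F)
    residual = log_amp - box_blur(log_amp, 3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = box_blur(sal, 5)                       # smooth the raw map
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

A small bright target on a dark, uniform "sky" background produces a saliency peak at the target location, which is the behavior the fused detector builds on.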

2021 ◽  
Vol 1 (1) ◽  
pp. 31-45
Author(s):  
Muhammad Amir Shafiq ◽  
Zhiling Long ◽  
Haibin Di ◽  
Ghassan AlRegib ◽  
...  

A new approach to seismic interpretation is proposed to leverage visual perception and human visual system modeling. Specifically, a saliency detection algorithm based on a novel attention model is proposed for identifying subsurface structures within seismic data volumes. The algorithm employs 3D-FFT and a multi-dimensional spectral projection, which decomposes local spectra into three distinct components, each depicting variations along different dimensions of the data. Subsequently, a novel directional center-surround attention model is proposed to incorporate directional comparisons around each voxel for saliency detection within each projected dimension. Next, the resulting saliency maps along each dimension are combined adaptively to yield a consolidated saliency map, which highlights various structures characterized by subtle variations and relative motion with respect to their neighboring sections. A priori information about the seismic data can be either embedded into the proposed attention model in the directional comparisons, or incorporated into the algorithm by specifying a template when combining saliency maps adaptively. Experimental results on two real seismic datasets from the North Sea, Netherlands and Great South Basin, New Zealand demonstrate the effectiveness of the proposed algorithm for detecting salient seismic structures of different natures and appearances in one shot, which differs significantly from traditional seismic interpretation algorithms. The results further demonstrate that the proposed method outperforms comparable state-of-the-art saliency detection algorithms for natural images and videos, which are inadequate for seismic imaging data.
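The directional center-surround comparison can be sketched as follows. This is a simplified stand-in under stated assumptions: `feat` is taken to be one already-computed projected spectral component, the 3D-FFT and spectral projection are omitted, neighbours are sampled with wrap-around (`np.roll`), and the adaptive fusion is reduced to normalize-and-average.

```python
import numpy as np

def directional_center_surround(feat, radius=2):
    """Directional center-surround sketch: for one projected spectral
    component `feat` (a 3-D array of per-voxel feature energy), a
    voxel's saliency is its mean absolute difference from neighbours
    sampled separately along each of the three axes.
    Note: np.roll wraps at the volume boundary -- a simplification."""
    sal = np.zeros_like(feat, dtype=float)
    n = 0
    for axis in range(3):
        for shift in range(1, radius + 1):
            for sign in (1, -1):
                sal += np.abs(feat - np.roll(feat, sign * shift, axis=axis))
                n += 1
    return sal / n

def fuse_components(components):
    """Adaptive-fusion stand-in: normalise each per-dimension map and
    average them into one consolidated saliency volume."""
    out = np.zeros_like(components[0], dtype=float)
    for c in components:
        out += (c - c.min()) / (c.max() - c.min() + 1e-12)
    return out / len(components)
```

A priori information would enter here either as axis-specific weights inside the directional loop or as a template multiplied into `fuse_components`.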


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Xin Wang ◽  
Chunyan Zhang ◽  
Chen Ning ◽  
Yuzhen Zhang ◽  
Guofang Lv

For infrared images, it is a formidable challenge to highlight salient regions completely while effectively suppressing background noise at the same time. To handle this problem, a novel saliency detection method based on multiscale local sparse representation and a local contrast measure is proposed in this paper. The saliency detection problem is tackled in three stages. First, a multiscale local sparse representation based approach is designed for detecting saliency in infrared images; it produces multiple saliency maps at various scales, which are then fused into a combined saliency map that highlights the salient region fully. Second, we adopt a local contrast measure based technique that divides the image into a number of blocks and uses them to calculate local contrast, generating a saliency map in which background noise is effectively suppressed. Last, to make full use of the advantages of the two saliency maps, we combine them using an adaptive fusion scheme. Experimental results show that our method achieves better performance than several state-of-the-art algorithms for saliency detection in infrared images.
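The block-wise local contrast stage can be sketched as below. This is a common small-target contrast variant (cell mean squared over the largest neighbouring cell mean), offered as an illustration; the paper's exact formulation may differ, and the cell size of 8 is an illustrative choice.

```python
import numpy as np

def local_contrast_map(img, cell=8):
    """Simplified block-wise local contrast measure: the image is tiled
    into cells, and each cell is scored by its squared mean intensity
    divided by the largest neighbouring cell mean, so isolated bright
    blocks score high while textured background is suppressed."""
    h, w = img.shape
    gh, gw = h // cell, w // cell
    means = img[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell).mean(axis=(1, 3))
    padded = np.pad(means, 1, mode='edge')
    # max of the 8 neighbouring cell means
    neigh = np.stack([padded[i:i + gh, j:j + gw]
                      for i in range(3) for j in range(3) if (i, j) != (1, 1)])
    contrast = means ** 2 / (neigh.max(axis=0) + 1e-8)
    # expand the cell grid back to pixel resolution
    return np.kron(contrast, np.ones((cell, cell)))
```

A bright cell surrounded by dim neighbours gets a high ratio, while uniform background cells score near their own intensity, which is how this map suppresses clutter before the adaptive fusion step.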


2014 ◽  
Vol 701-702 ◽  
pp. 348-351
Author(s):  
Gang Hou ◽  
He Xin Yan ◽  
Fan Zhang ◽  
Hui Rong Hou ◽  
Ming Zhang

In recent years, saliency detection has gained increasing attention since it can significantly boost many content-based multimedia applications. In this paper, we propose a visual saliency detection algorithm based on multi-scale superpixels and dictionary learning. First, in each scale space, we extract the boundary superpixels as training samples and learn a dictionary through sparse coding and dictionary learning methods. Then, according to the reconstruction error of each superpixel, a saliency map is generated for each superpixel scale. Finally, the saliency maps from the different scale spaces are fused to generate the final saliency map. Experimental results show that the proposed algorithm highlights salient regions uniformly and performs better than five other methods.
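The reconstruction-error scoring can be illustrated with a least-squares stand-in for the learned sparse dictionary: boundary-region features form the basis, and regions the basis reconstructs poorly are deemed salient. The feature matrix and boundary indices here are hypothetical inputs (e.g. mean color per superpixel); the paper's sparse coding step is replaced by plain least squares for brevity.

```python
import numpy as np

def reconstruction_saliency(feats, boundary_idx):
    """Saliency as reconstruction error: features of boundary regions
    act as the dictionary basis, every region is reconstructed in that
    basis by least squares, and the residual norm is the saliency score."""
    D = feats[boundary_idx].T                        # d x k basis
    coef, *_ = np.linalg.lstsq(D, feats.T, rcond=None)
    recon = (D @ coef).T
    err = np.linalg.norm(feats - recon, axis=1)
    return (err - err.min()) / (err.max() - err.min() + 1e-12)
```

Running this per scale and averaging the resulting maps mirrors the paper's multi-scale fusion; a region that no boundary combination can explain (an object unlike the image border) receives the top score.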


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 421 ◽  
Author(s):  
Mian Fareed ◽  
Qi Chun ◽  
Gulnaz Ahmed ◽  
Adil Murtaza ◽  
Muhammad Asif ◽  
...  

Image saliency detection is a very helpful step in many computer vision-based smart systems to reduce the computational complexity by only focusing on the salient parts of the image. Currently, the image saliency is detected through representation-based generative schemes, as these schemes are helpful for extracting the concise representations of the stimuli and to capture the high-level semantics in visual information with a small number of active coefficients. In this paper, we propose a novel framework for salient region detection that uses appearance-based and regression-based schemes. The framework segments the image and forms reconstructive dictionaries from four sides of the image. These side-specific dictionaries are further utilized to obtain the saliency maps of the sides. A unified version of these maps is subsequently employed by a representation-based model to obtain a contrast-based salient region map. The map is used to obtain two regression-based maps with LAB and RGB color features that are unified through the optimization-based method to achieve the final saliency map. Furthermore, the side-specific reconstructive dictionaries are extracted from the boundary and the background pixels, which are enriched with geometrical and visual information. The approach has been thoroughly evaluated on five datasets and compared with the seven most recent approaches. The simulation results reveal that our model performs favorably in comparison with the current saliency detection schemes.
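The final unification of the two regression-based maps admits a simple closed form under a quadratic model: the map minimizing the weighted squared distance to each input map is their per-pixel weighted average. This is a stand-in for the paper's optimization-based method, with illustrative uniform weights.

```python
import numpy as np

def unify_maps(maps, weights=None):
    """Closed-form unification: minimizing sum_i w_i * (s - m_i)^2 per
    pixel yields the weighted average of the input maps -- a simple
    stand-in for the paper's optimization-based fusion of the LAB- and
    RGB-based regression maps."""
    maps = np.stack([np.asarray(m, dtype=float) for m in maps])
    if weights is None:
        weights = np.ones(len(maps))
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return (w * maps).sum(axis=0) / w.sum()
```

With data-dependent weights (e.g. each map's agreement with the contrast-based map), the same closed form tilts the result toward the more reliable color space.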


Author(s):  
Lai Jiang ◽  
Zhe Wang ◽  
Mai Xu ◽  
Zulin Wang

The transformed-domain features of images are effective in distinguishing salient and non-salient regions. In this paper, we propose a novel deep complex neural network, named Sal-DCNN, to predict image saliency by learning features in both the pixel and transformed domains. Before proposing Sal-DCNN, we analyze the saliency cues encoded in the discrete Fourier transform (DFT) domain and obtain the following findings: 1) the phase spectrum encodes most saliency cues; 2) a certain pattern of the amplitude spectrum is important for saliency prediction; 3) the transformed-domain spectrum is robust to noise and down-sampling for saliency prediction. According to these findings, we develop the structure of Sal-DCNN, which has two main stages: a complex dense encoder and a three-stream multi-domain decoder. Given this structure, saliency maps can be predicted under the supervision of ground-truth fixation maps in both the pixel and transformed domains. Experimental results show that our Sal-DCNN method outperforms eight other state-of-the-art methods for image saliency prediction on three databases.
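Finding (1), that the phase spectrum encodes most saliency cues, is easy to demonstrate with a phase-only reconstruction: discard the DFT amplitude (set it to 1) and invert the transform. This sketch is an illustration of that finding, not of Sal-DCNN itself; the box blur stands in for the usual Gaussian post-smoothing.

```python
import numpy as np

def phase_only_saliency(img, k=5):
    """Phase-spectrum-only reconstruction: the image is rebuilt from
    DFT phase alone, which concentrates energy at structure-rich,
    salient locations; a k x k box blur then smooths the raw map."""
    F = np.fft.fft2(img.astype(float))
    sal = np.abs(np.fft.ifft2(np.exp(1j * np.angle(F)))) ** 2
    p = k // 2
    sp = np.pad(sal, p, mode='edge')
    out = np.zeros_like(sal)
    for di in range(k):
        for dj in range(k):
            out += sp[di:di + sal.shape[0], dj:dj + sal.shape[1]]
    out /= k * k
    return out / (out.max() + 1e-12)
```

Even with all amplitude information removed, the map still peaks around the object, which motivates learning in the transformed domain.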


2016 ◽  
Vol 2016 ◽  
pp. 1-18 ◽  
Author(s):  
Qiangqiang Zhou ◽  
Weidong Zhao ◽  
Lin Zhang ◽  
Zhicheng Wang

Saliency detection is an important preprocessing step in many application fields such as computer vision, robotics, and graphics, reducing computational cost by focusing on significant positions and neglecting insignificant ones in the scene. Different from most previous methods, which mainly utilize the contrast of low-level features and fuse various feature maps in a simple linear weighting form, we propose a novel salient object detection algorithm that takes both background and foreground cues into consideration and integrates bottom-up coarse salient region extraction and a top-down background measure via boundary label propagation into a unified optimization framework, acquiring a refined saliency detection result. The coarse saliency map fuses three components: a local contrast map, which accords with psychological laws; a global frequency prior map; and a global color distribution map. To form the background map, we first construct an affinity matrix, select nodes lying on the border as labels representing the background, and then carry out a propagation to generate the regional background map. The proposed model has been evaluated on four datasets. As demonstrated in the experiments, our method outperforms most existing saliency detection models with robust performance.
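The background-map stage (affinity matrix, border seeds, propagation) can be sketched as closed-form label propagation in the manifold-ranking style. The region features, `sigma`, and `alpha` below are illustrative assumptions, not the paper's settings, and the fully-connected affinity replaces whatever graph connectivity the paper uses.

```python
import numpy as np

def regional_background_map(feats, border_idx, sigma=0.5, alpha=0.9):
    """Label-propagation sketch: an affinity matrix is built from
    pairwise region-feature distances, border regions are seeded as
    background labels, and the labels diffuse over the graph via the
    closed form f = (I - alpha * S)^-1 y."""
    n = len(feats)
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1) + 1e-12
    S = W / np.sqrt(np.outer(d, d))       # symmetric normalisation
    y = np.zeros(n)
    y[border_idx] = 1.0                   # border regions seed the background
    f = np.linalg.solve(np.eye(n) - alpha * S, y)
    return f / (f.max() + 1e-12)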


Author(s):  
Liming Li ◽  
Xiaodong Chai ◽  
Shuguang Zhao ◽  
Shubin Zheng ◽  
Shengchao Su

This paper proposes an effective method to improve saliency detection performance via iterative bootstrap learning, which consists of two tasks: saliency optimization and saliency integration. Specifically, multiscale segmentation and feature extraction are first performed on the input image. Second, prior saliency maps are generated using existing saliency models and combined into an initial saliency map. Third, the prior maps are fed into a saliency regressor: training samples are collected from the prior maps at multiple scales, and a random forest regressor is learned from these data. The initial saliency map and the regressor output are then integrated into a coarse saliency map. Finally, to further improve quality, both the initial and coarse saliency maps are fed into the saliency regressor again, and its output is integrated with the initial and coarse maps to form the final saliency map. Experimental results on three public data sets demonstrate that the proposed method consistently achieves the best performance and that significant improvement can be obtained when applying it to existing saliency models.
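One bootstrap round can be sketched as follows, with a nearest-centroid scorer standing in for the paper's random-forest regressor (which would need a full ML library). The thresholds `lo`/`hi` for harvesting confident training samples from the prior map are illustrative assumptions.

```python
import numpy as np

def bootstrap_round(feats, prior, lo=0.2, hi=0.8):
    """One bootstrap-learning round: regions the prior map rates
    confidently (>= hi or <= lo) become positive/negative training
    samples, every region is re-scored by its relative distance to the
    two class centroids (the random-forest stand-in), and the new score
    is averaged with the prior as a simple integration step."""
    pos = feats[prior >= hi]
    neg = feats[prior <= lo]
    cp, cn = pos.mean(axis=0), neg.mean(axis=0)
    dp = np.linalg.norm(feats - cp, axis=1)
    dn = np.linalg.norm(feats - cn, axis=1)
    score = dn / (dp + dn + 1e-12)   # near the positive centroid -> high
    return 0.5 * (prior + score)
```

Feeding the returned map back in as the next round's `prior` gives the iterative refinement the paper describes: confident regions sharpen while ambiguous mid-range regions are pulled toward whichever class their features resemble.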


2011 ◽  
Vol 403-408 ◽  
pp. 1927-1932
Author(s):  
Hai Peng ◽  
Hua Jun Feng ◽  
Ju Feng Zhao ◽  
Zhi Hai Xu ◽  
Qi Li ◽  
...  

We propose a new image fusion method to fuse frames of infrared and visible image sequences more effectively. In our method, we introduce an improved salient-feature detection algorithm to obtain the saliency map of the original frames; it detects not only spatially but also temporally salient features by using inter-frame dynamic information. Images are then segmented into target regions and background regions based on the saliency distribution. We formulate fusion rules for the different regions using a double-threshold method and finally fuse the image frames in the NSCT multi-scale domain. Comparison with other methods shows that our result more effectively emphasizes the salient features of target regions while maintaining the details of background regions from the original image sequences.

