Salient Region Detection by Fusing Foreground and Background Cues Extracted from Single Image

2016 ◽  
Vol 2016 ◽  
pp. 1-18 ◽  
Author(s):  
Qiangqiang Zhou ◽  
Weidong Zhao ◽  
Lin Zhang ◽  
Zhicheng Wang

Saliency detection is an important preprocessing step in many application fields such as computer vision, robotics, and graphics: it reduces computational cost by focusing on significant positions in the scene and neglecting nonsignificant ones. Most previous methods mainly utilize the contrast of low-level features and fuse the resulting feature maps in a simple linear weighting form. In this paper, we propose a novel salient object detection algorithm that takes both background and foreground cues into consideration and integrates bottom-up coarse salient region extraction and a top-down background measure, obtained via boundary-label propagation, into a unified optimization framework to acquire a refined saliency detection result. The coarse saliency map is itself fused from three components: a local contrast map, which accords with psychological laws of perception; a global frequency prior map; and a global color distribution map. To form the background map, we first construct an affinity matrix, select some nodes lying on the image border as labels representing the background, and then carry out a propagation to generate the regional background map. The proposed model has been evaluated on four datasets. As the experiments demonstrate, our method outperforms most existing saliency detection models with robust performance.
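
The boundary-label propagation step can be sketched as a simple label-propagation loop over an affinity graph: border regions seed the background score, which diffuses to feature-similar regions. The Gaussian affinity, the parameter values, and the function name below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def propagate_background(features, border_idx, sigma=0.3, alpha=0.9, iters=200):
    """Background map by label propagation: border regions are taken as
    background labels and their score is spread over an affinity graph
    built from feature similarity (parameter values are our choice)."""
    n = len(features)
    labels = set(border_idx)
    # Gaussian affinity between every pair of region feature vectors.
    W = [[0.0 if i == j else
          math.exp(-sum((a - b) ** 2 for a, b in zip(features[i], features[j]))
                   / (2 * sigma ** 2))
          for j in range(n)] for i in range(n)]
    # Row-normalise to obtain transition probabilities.
    P = [[w / (sum(row) or 1.0) for w in row] for row in W]
    y = [1.0 if i in labels else 0.0 for i in range(n)]
    f = y[:]
    for _ in range(iters):  # f <- alpha * P f + (1 - alpha) * y
        f = [alpha * sum(P[i][j] * f[j] for j in range(n)) + (1 - alpha) * y[i]
             for i in range(n)]
    m = max(f) or 1.0
    return [v / m for v in f]
```

Regions similar to the border labels end up with high background scores; the foreground map is then, roughly, the complement.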

2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Mengmeng Zhang ◽  
Zhi Liu ◽  
Huan Zhou ◽  
Jian Wang

Image saliency detection has become increasingly important with the development of intelligent identification and machine vision technology. It is an essential step in many image processing algorithms such as image retrieval, image segmentation, image recognition, and adaptive image compression. We propose a salient region detection algorithm for full-resolution images that analyzes the randomness and correlation of image pixels and uses a pixel-to-region saliency computation mechanism. The algorithm first obtains points with higher saliency probability using an improved smallest univalue segment assimilating nucleus (SUSAN) operator. It then reconstructs the entire salient region by taking these points as references and combining them with the image's spatial color distribution as well as regional and global contrasts. Subjective and objective evaluations show that the proposed algorithm exhibits outstanding performance on indices such as precision and recall rates.
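
A toy version of the SUSAN-style seeding may help: the response below counts how many neighbours share the nucleus intensity, so points with a small USAN area (edges, corners) score high. This is only the classical operator's core idea; the paper's "improved" variant adds refinements not shown here, and the threshold is our choice:

```python
def susan_response(img, radius=1, t=0.1):
    """Toy SUSAN-style response: for each pixel, count neighbours whose
    intensity lies within t of the nucleus (the USAN area); a small USAN
    marks edge/corner points, usable as saliency seeds."""
    h, w = len(img), len(img[0])
    resp = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            usan, total = 0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and (dy, dx) != (0, 0):
                        total += 1
                        if abs(img[ny][nx] - img[y][x]) <= t:
                            usan += 1
            # Response is high where the USAN area is small (edges/corners).
            resp[y][x] = 1.0 - usan / total if total else 0.0
    return resp
```

On a flat region every neighbour matches the nucleus and the response is zero; on an intensity boundary it rises.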


2018 ◽  
Vol 8 (12) ◽  
pp. 2526 ◽  
Author(s):  
Huiyuan Luo ◽  
Guangliang Han ◽  
Peixun Liu ◽  
Yanfeng Wu

Diffusion-based salient region detection methods have gained great popularity. In most of them, saliency values are ranked on a 2-layer neighborhood graph that connects each node to its neighboring nodes and to the nodes sharing common boundaries with those neighbors. However, because only the local relevance between neighbors is considered, the salient region may come out heterogeneous or even be wrongly suppressed, especially when the features of the salient object are diverse. To address this issue, we present an effective saliency detection method that diffuses saliency on a graph with nonlocal connections. First, a saliency-biased Gaussian model refines the saliency map based on the compactness cue, and the compactness saliency information is then diffused on a 2-layer sparse graph with nonlocal connections. Second, we obtain the contrast of each superpixel by restricting the reference region to the background; a saliency-biased Gaussian refinement model is generated in the same way, and the uniqueness-based saliency information is propagated on the 2-layer sparse graph. We linearly integrate the initial compactness- and uniqueness-based saliency maps, since the two cues complement each other. Finally, to obtain a highlighted and homogeneous saliency map, a single-layer updating and multi-layer integrating scheme is presented. Comprehensive experiments on four benchmark datasets demonstrate that the proposed method performs better in terms of various evaluation metrics.
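
The saliency-biased Gaussian refinement used in both branches can be sketched as re-weighting the map by a Gaussian centred at its own saliency-weighted centroid, which suppresses scattered responses far from the salient mass. The sigma value and this exact weighting are our assumptions, not the paper's formulation:

```python
import math

def gaussian_refine(sal, sigma=1.5):
    """Saliency-biased Gaussian refinement sketch: re-weight a saliency map
    by a Gaussian centred at the map's saliency-weighted centroid."""
    total = sum(sum(row) for row in sal) or 1.0
    cy = sum(y * v for y, row in enumerate(sal) for v in row) / total
    cx = sum(x * v for row in sal for x, v in enumerate(row)) / total
    return [[v * math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
             for x, v in enumerate(row)] for y, row in enumerate(sal)]
```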


Author(s):  
GUANGHUA TAN ◽  
JUN QI ◽  
CHUNMING GAO ◽  
JIN CHEN ◽  
LIYUAN ZHUO

Spectral matting is the state-of-the-art matting method and can solve the highly under-conditioned matting problem without manual intervention. However, it suffers from huge computational cost and inaccurate alpha mattes. This paper presents a modified spectral matting method that incorporates a saliency detection algorithm to obtain a more accurate alpha matte at lower computational cost. First, the saliency detection algorithm is used to detect the general locations of foreground objects; its original two-stage scheme is replaced by a feedback scheme to produce a saliency map better suited to unsupervised image matting. Next, matting components are obtained through a linear transformation of the smallest eigenvectors of the matting Laplacian matrix. Then, the improved saliency map is used to group the matting components. Finally, the alpha matte is obtained based on a matte cost function. Experiments show that the proposed method outperforms state-of-the-art methods based on spectral matting in both speed and alpha matte accuracy.
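
The grouping step can be illustrated with a much-simplified stand-in: score each matting component by its mean saliency over its support and sum the components that pass a threshold into the foreground matte. The paper instead minimizes a matte cost function; the threshold and function below are our illustration:

```python
def group_components(components, saliency, thresh=0.5):
    """Group matting components into a foreground alpha matte by their mean
    saliency (a simplified stand-in for matte-cost-based grouping).
    components: list of per-pixel alpha lists; saliency: per-pixel list."""
    alpha = [0.0] * len(saliency)
    for comp in components:
        mass = sum(comp) or 1.0
        mean_sal = sum(a * s for a, s in zip(comp, saliency)) / mass
        if mean_sal > thresh:
            # Add this component to the foreground matte, clamped to [0, 1].
            alpha = [min(1.0, x + a) for x, a in zip(alpha, comp)]
    return alpha
```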


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 421 ◽  
Author(s):  
Mian Fareed ◽  
Qi Chun ◽  
Gulnaz Ahmed ◽  
Adil Murtaza ◽  
Muhammad Asif ◽  
...  

Image saliency detection is a very helpful step in many computer vision-based smart systems, reducing computational complexity by focusing only on the salient parts of the image. Currently, image saliency is often detected through representation-based generative schemes, as these schemes help extract concise representations of the stimuli and capture the high-level semantics of visual information with a small number of active coefficients. In this paper, we propose a novel framework for salient region detection that uses appearance-based and regression-based schemes. The framework segments the image and forms reconstructive dictionaries from the four sides of the image. These side-specific dictionaries are then used to obtain saliency maps of the sides. A unified version of these maps is subsequently employed by a representation-based model to obtain a contrast-based salient region map, which in turn yields two regression-based maps with LAB and RGB color features; these are unified through an optimization-based method to achieve the final saliency map. The side-specific reconstructive dictionaries are extracted from the boundary and background pixels and are enriched with geometrical and visual information. The approach has been thoroughly evaluated on five datasets and compared with the seven most recent approaches. The simulation results reveal that our model performs favorably in comparison with current saliency detection schemes.
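
The dictionary idea can be illustrated with a least-squares stand-in: regions that a dictionary built from boundary (assumed-background) features reconstructs poorly score as salient. Plain least squares replaces the paper's representation model here, and the function name and inputs are our assumptions:

```python
import numpy as np

def reconstruction_saliency(feats, boundary_idx):
    """Score each region by how badly a dictionary of boundary (assumed
    background) features reconstructs it: high residual = likely salient."""
    D = feats[boundary_idx].T              # columns are boundary features
    err = np.empty(len(feats))
    for i, f in enumerate(feats):
        # Best least-squares reconstruction of f from the boundary dictionary.
        coef, *_ = np.linalg.lstsq(D, f, rcond=None)
        err[i] = np.linalg.norm(D @ coef - f)
    m = err.max()
    return err / m if m > 0 else err
```

A feature direction absent from the boundary set cannot be reconstructed and receives the maximum score.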


Author(s):  
Yingchun Guo ◽  
Yanhong Feng ◽  
Gang Yan ◽  
Shuo Shi

Salient region detection is a challenging problem in computer vision, useful in image segmentation, region-based image retrieval, and other tasks. In this paper we present a multi-resolution salient region detection method in the frequency domain that highlights salient regions with well-defined object boundaries. The original image is sub-sampled into three multi-resolution layers, and for each layer the luminance and color salient features are extracted in the frequency domain. The saliency values are then calculated using the Euclidean distance in Lab space, and a normal distribution function is applied to the salient map of each layer to remove noise and enhance the correlation among neighboring pixels. The final saliency map is obtained by normalizing and merging the multi-resolution salient maps. Experimental evaluation shows promising results, with the proposed model outperforming the state-of-the-art frequency-tuned model.
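
The frequency-tuned baseline that this method is compared against is compact enough to sketch: saliency is the Lab-space Euclidean distance between each pixel (normally lightly smoothed first; smoothing omitted here) and the image mean. Pixels are modelled as (L, a, b) tuples:

```python
def ft_saliency(lab_img):
    """Frequency-tuned-style saliency sketch: per-pixel Euclidean distance
    in Lab space from the image's mean colour. lab_img is a flat list of
    (L, a, b) tuples; the pre-smoothing step is omitted for brevity."""
    n = len(lab_img)
    mean = [sum(p[c] for p in lab_img) / n for c in range(3)]
    return [sum((p[c] - mean[c]) ** 2 for c in range(3)) ** 0.5
            for p in lab_img]
```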


Author(s):  
Rajkumar Kannan ◽  
Sridhar Swaminathan ◽  
Gheorghita Ghinea ◽  
Frederic Andres ◽  
Kalaiarasi Sonai Muthu Anbananthen

Video summarization condenses a video by extracting its informative and interesting segments. In this article, a novel video summarization approach is proposed based on spatiotemporal salient region detection. The proposed approach first segments a video into a set of shots, which are ranked by spatiotemporal saliency scores; the score for a shot is computed by aggregating the frame-level spatiotemporal saliency scores. The approach detects spatial and temporal salient regions separately, using different saliency theories related to the objects present in a visual scene. The spatial saliency of a video frame is computed from color contrast and color distribution estimations integrated with a center prior. The temporal saliency of a video frame is estimated as an integration of local and global temporal saliencies computed from patch-level optical flow abstractions. Finally, the top-ranked shots with the highest saliency scores are selected to generate the video summary. The objective and subjective experimental results demonstrate the efficacy of the proposed approach.
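
The final selection step can be sketched directly: average the frame-level scores within each shot, keep the top-ranked fraction, and emit the kept shots in temporal order. The ratio parameter and function name are our illustration; frame scoring is assumed given:

```python
def summarize(shots, ratio=0.4):
    """Rank shots by mean frame-level saliency and keep the top fraction,
    in temporal order. shots: list of per-shot lists of frame scores.
    Returns the indices of the selected shots."""
    scored = [(sum(frames) / len(frames), i) for i, frames in enumerate(shots)]
    k = max(1, round(len(shots) * ratio))
    # Take the k best scores, then restore temporal (index) order.
    keep = sorted(sorted(scored, reverse=True)[:k], key=lambda t: t[1])
    return [i for _, i in keep]
```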


Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 459
Author(s):  
Shaosheng Dai ◽  
Dongyang Li

To address incomplete saliency detection and unclear boundaries in infrared multi-target images with varying target sizes and low signal-to-noise ratio under sky-background conditions, this paper proposes a multi-target saliency detection method based on fusing multiple saliency maps. In such infrared images the target areas are mainly bright and the background areas dark. First, combining a multi-scale top-hat transformation, the image is eroded and dilated to extract the difference between light and dark parts, and it is then reconstructed to reduce interference from blurred sky-background noise. The resulting image is transformed from the spatial domain to the frequency domain, and the spectral residual and phase spectrum are extracted to obtain two saliency maps via multi-scale Gaussian filtering reconstruction. In parallel, quaternion features are extracted, the phase spectrum is transformed and reconstructed, and a third saliency map is obtained by Gaussian filtering. Finally, the three saliency maps are fused to complete the saliency detection of the infrared image. Tests on infrared video frames, analyzed with Receiver Operating Characteristic (ROC) curves and the Area Under the Curve (AUC) index, show that the saliency maps generated by this method have clear target details and good background suppression, with AUC above 99%. The method effectively improves multi-target saliency detection in infrared images under sky backgrounds and benefits subsequent detection and tracking of image targets.


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Xiaochun Zou ◽  
Xinbo Zhao ◽  
Yongjia Yang ◽  
Na Li

This paper brings forth a learning-based visual saliency model for detecting diagnostic diabetic macular edema (DME) regions of interest (RoIs) in retinal images. The method models the cognitive process of visual selection of relevant regions that arises during an ophthalmologist’s image examination. To record this process, we collected eye-tracking data of 10 ophthalmologists on 100 images and used this database for training and testing. Based on the analysis, two properties (Feature Property and Position Property) are derived and combined by a simple intersection operation to obtain a saliency map. The Feature Property is implemented with a support vector machine (SVM), using the diagnosis as supervisor; the Position Property is implemented by statistical analysis of the training samples. This technique learns the ophthalmologists’ visual preferences while simultaneously considering feature uniqueness. The method was evaluated using three popular saliency model evaluation scores (AUC, EMD, and SS) and three quality measurements (classical sensitivity, specificity, and Youden’s J statistic). The proposed method outperforms 8 state-of-the-art saliency models and 3 salient region detection approaches devised for natural images. Furthermore, our model successfully detects the DME RoIs in retinal images without sophisticated image processing such as region segmentation.
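
The "simple intersection operation" combining the two properties can plausibly be read as a pixelwise minimum: a location must score high on both the Feature and Position maps to survive. The sketch below rests on that assumption:

```python
def combine_maps(feature_map, position_map):
    """Combine the Feature and Position property maps by pixelwise minimum,
    one plausible reading of the 'simple intersection operation'."""
    return [[min(f, p) for f, p in zip(frow, prow)]
            for frow, prow in zip(feature_map, position_map)]
```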


2011 ◽  
Vol 403-408 ◽  
pp. 1927-1932
Author(s):  
Hai Peng ◽  
Hua Jun Feng ◽  
Ju Feng Zhao ◽  
Zhi Hai Xu ◽  
Qi Li ◽  
...  

We propose a new image fusion method to fuse the frames of infrared and visible image sequences more effectively. In our method, we introduce an improved salient feature detection algorithm to obtain the saliency map of the original frames. This improved method detects not only spatially but also temporally salient features, using dynamic inter-frame information. Images are then segmented into target regions and background regions based on the saliency distribution. We formulate fusion rules for the different regions using a double-threshold method and finally fuse the image frames in the NSCT multi-scale domain. Comparison with other methods shows that our result more effectively stresses the salient features of target regions while maintaining the details of background regions from the original image sequences.
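
The double-threshold fusion rule can be sketched per pixel: clearly salient pixels take the infrared value, clear background takes the visible value, and the band between the two thresholds blends linearly. The threshold values and this particular blend are our illustration, not the paper's exact rule (which operates on NSCT coefficients):

```python
def fuse_pixel(ir, vis, sal, t_low=0.3, t_high=0.7):
    """Double-threshold fusion rule sketch for one pixel: infrared wins in
    salient target regions, visible wins in background, linear blend between."""
    if sal >= t_high:
        return ir
    if sal <= t_low:
        return vis
    w = (sal - t_low) / (t_high - t_low)
    return w * ir + (1 - w) * vis
```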


Author(s):  
P. Santhiya ◽  
S. Selvi

Detecting visually salient regions in images is a fundamental problem, useful for applications like image segmentation, adaptive compression, and object recognition. A salient object region is a soft decomposition of an image into foreground and background elements, expressed through saliency maps. We create a saliency map using a linear combination of colors in a high-dimensional color space, and we improve the saliency estimates by exploiting the relative location and color contrast between superpixels; a learning-based algorithm then resolves the saliency estimation from a trimap. The approach is based on the observation that salient regions frequently have distinctive colors compared with their backgrounds in human perception, even though human perception itself is complicated and highly nonlinear. Experimental results on three benchmark datasets show that our approach compares favorably with prior state-of-the-art saliency estimation methods. Finally, the salient region detection outputs a full-resolution saliency map with well-defined boundaries of the salient object.
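
The location-weighted color contrast cue can be sketched per superpixel: accumulate color contrast against every other superpixel, attenuated by spatial distance, so a region that differs in color from its nearby neighbours scores high. The Gaussian falloff and sigma are our assumptions:

```python
import math

def contrast_saliency(colors, positions, sigma=0.5):
    """Superpixel saliency from colour contrast weighted by spatial
    proximity. colors: list of colour tuples; positions: list of
    normalised (x, y) centroids. Returns scores normalised to [0, 1]."""
    sal = []
    for i in range(len(colors)):
        s = 0.0
        for j in range(len(colors)):
            if i == j:
                continue
            # Colour distance, attenuated by squared spatial distance.
            dc = sum((a - b) ** 2 for a, b in zip(colors[i], colors[j])) ** 0.5
            dp = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j]))
            s += dc * math.exp(-dp / (2 * sigma ** 2))
        sal.append(s)
    m = max(sal) or 1.0
    return [v / m for v in sal]
```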

