Military Object Real-Time Detection Technology Combined with Visual Salience and Psychology

Electronics ◽  
2018 ◽  
Vol 7 (10) ◽  
pp. 216 ◽  
Author(s):  
Xia Hua ◽  
Xinqing Wang ◽  
Dong Wang ◽  
Jie Huang ◽  
Xiaodong Hu

This paper presents a method of military object detection that combines human visual salience and visual psychology in order to achieve rapid and accurate detection of military objects on a vast and complex battlefield. Inspired by the process of human visual information processing, the paper establishes a salient region detection model based on dual channels and feature fusion. In this model, the pre-attention channel processes information on image position and contrast, while the sub-attention channel first integrates information on primary visual features; the results of the two channels are then merged to determine the salient region. The main theory of Gestalt visual psychology is then used as a constraint to integrate the candidate salient regions and obtain an object figure with overall perception. After that, an efficient sub-window search is used to detect and filter objects in order to determine their location and extent. The experimental results show that, compared with existing algorithms, the proposed algorithm has prominent advantages in precision, effectiveness, and simplicity; it not only significantly reduces the effectiveness of battlefield camouflage and deception but also achieves rapid and accurate detection of military objects, which broadens its application prospects.
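The dual-channel idea in the abstract can be illustrated with a minimal sketch: a pre-attention map derived from local contrast and a sub-attention feature map are normalized and merged into one salient-region map. The function names, the neighborhood-mean contrast measure, and the fixed fusion weight are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def local_contrast(channel, k=5):
    """Pre-attention cue: absolute difference between each pixel and
    the mean of its k x k neighbourhood (a simple contrast measure)."""
    h, w = channel.shape
    pad = k // 2
    padded = np.pad(channel, pad, mode="edge")
    out = np.empty_like(channel, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = abs(channel[i, j] - padded[i:i + k, j:j + k].mean())
    return out

def normalize(m):
    """Scale a map to [0, 1]; return zeros for a constant map."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_channels(pre_map, sub_map, w_pre=0.5):
    """Merge the two channel outputs into one salient-region map."""
    return w_pre * normalize(pre_map) + (1 - w_pre) * normalize(sub_map)
```

With a toy image containing a bright square, the fused map peaks in and around the square rather than in the uniform background; the paper's Gestalt-based grouping and sub-window search would then operate on such a map.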

Author(s):  
Yingchun Guo ◽  
Yanhong Feng ◽  
Gang Yan ◽  
Shuo Shi

Salient region detection is a challenging problem in computer vision that is useful in image segmentation, region-based image retrieval, and other tasks. In this paper we present a multi-resolution salient region detection method in the frequency domain that highlights salient regions with well-defined object boundaries. The original image is sub-sampled into three multi-resolution layers, and for each layer the luminance and color saliency features are extracted in the frequency domain. The saliency values are then calculated using Euclidean distance in Lab color space, and a normal distribution function is used to refine the saliency map of each layer in order to remove noise and enhance the correlation among neighboring pixels. The final saliency map is obtained by normalizing and merging the multi-resolution saliency maps. Experimental evaluation shows promising results, with the proposed model outperforming the state-of-the-art frequency-tuned model.
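The frequency-tuned baseline that this method builds on and compares against scores each pixel by the Euclidean distance between the image's mean color and a blurred version of the image; a multi-resolution variant repeats this at sub-sampled layers and merges the normalized maps. The sketch below uses RGB directly as a stand-in for Lab and a box blur as a stand-in for the Gaussian blur; all names are illustrative.

```python
import numpy as np

def box_blur(img, k=3):
    """Crude box blur, used here as a stand-in for a Gaussian blur."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean(axis=(0, 1))
    return out

def frequency_tuned_saliency(img):
    """Per-pixel Euclidean distance between the global mean colour and
    a blurred image (the paper works in Lab space; RGB here for brevity)."""
    mean_color = img.reshape(-1, img.shape[2]).mean(axis=0)
    return np.linalg.norm(box_blur(img) - mean_color, axis=2)

def multires_saliency(img, layers=3):
    """Compute the saliency at sub-sampled layers and merge the
    normalized maps (sides must be divisible by 2**(layers-1))."""
    h, w = img.shape[:2]
    total = np.zeros((h, w))
    cur = img.astype(float)
    for _ in range(layers):
        s = frequency_tuned_saliency(cur)
        # upsample back to full resolution by pixel repetition
        rep = np.repeat(np.repeat(s, h // s.shape[0], axis=0),
                        w // s.shape[1], axis=1)
        total += rep / (rep.max() + 1e-9)
        cur = cur[::2, ::2]
    return total / layers
```

A colored patch on a dark background scores high at every layer, so the merged map keeps it salient while layer-specific noise is averaged down.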


Author(s):  
Rajkumar Kannan ◽  
Sridhar Swaminathan ◽  
Gheorghita Ghinea ◽  
Frederic Andres ◽  
Kalaiarasi Sonai Muthu Anbananthen

Video summarization condenses a video by extracting its informative and interesting segments. In this article, a novel video summarization approach based on spatiotemporal salient region detection is proposed. The approach first segments a video into a set of shots, which are ranked by spatiotemporal saliency scores. The score for a shot is computed by aggregating the frame-level spatiotemporal saliency scores. Spatial and temporal salient regions are detected separately, using different saliency theories related to the objects present in a visual scene. The spatial saliency of a video frame is computed from color contrast and color distribution estimations together with a center prior. The temporal saliency of a frame is estimated by integrating local and global temporal saliencies computed from patch-level optical flow abstractions. Finally, the top-ranked shots with the highest saliency scores are selected to generate the video summary. Objective and subjective experimental results demonstrate the efficacy of the proposed approach.
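The shot-ranking step described above can be sketched independently of how the frame scores are produced: aggregate frame-level saliency per shot, rank the shots, and keep the top few in temporal order. The mean aggregation and the function signature are assumptions for illustration.

```python
from typing import List, Tuple

def rank_shots(frame_scores: List[float],
               shot_bounds: List[Tuple[int, int]],
               top_k: int = 2) -> List[Tuple[int, int]]:
    """Aggregate frame-level saliency into shot scores (mean here),
    then return the top_k shots for the summary, in temporal order."""
    scored = []
    for start, end in shot_bounds:
        shot = frame_scores[start:end]
        scored.append((sum(shot) / max(len(shot), 1), (start, end)))
    # rank by score, keep the best, then restore temporal order
    top = sorted(scored, key=lambda t: t[0], reverse=True)[:top_k]
    return sorted(bounds for _, bounds in top)
```

For example, with three shots whose mean saliencies are 0.1, 0.9, and 0.2, `top_k=2` selects the second and third shots and emits them in playback order.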


2018 ◽  
Vol 2018 ◽  
pp. 1-16
Author(s):  
Ye Liang ◽  
Congyan Lang ◽  
Jian Yu ◽  
Hongzhe Liu ◽  
Nan Ma

The popularity of social networks has driven rapid growth in social images, which have become an increasingly important image type. One of the most distinctive attributes of social images is the tag; however, state-of-the-art methods fail to fully exploit tag information for saliency detection. This paper therefore focuses on salient region detection in social images using both image appearance features and image tag cues. First, a deep convolutional neural network is built that considers both appearance features and tag features. Second, tag-neighbor and appearance-neighbor saliency aggregation terms are added to the saliency model to enhance salient regions. The aggregation method depends on the individual image and accounts for performance gaps appropriately. Finally, we have also constructed a new large dataset of challenging social images with pixel-wise saliency annotations to promote further research on and evaluation of visual saliency models. Extensive experiments show that the proposed method performs well not only on the new dataset but also on several state-of-the-art saliency datasets.
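The tag-neighbor aggregation term can be illustrated with a minimal sketch: an image's own saliency map is blended with the maps of images that share tags with it, weighted by tag overlap. The Jaccard weighting, the blending factor `alpha`, and all names are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def jaccard(tags_a, tags_b):
    """Tag overlap between two images (Jaccard similarity)."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def aggregate_saliency(own_map, own_tags, neighbors, alpha=0.6):
    """Blend an image's own saliency map with the maps of its tag
    neighbours; each neighbour is a (saliency_map, tags) pair."""
    weights, acc = 0.0, np.zeros_like(own_map)
    for n_map, n_tags in neighbors:
        w = jaccard(own_tags, n_tags)
        acc += w * n_map
        weights += w
    neighbor_term = acc / weights if weights > 0 else own_map
    return alpha * own_map + (1 - alpha) * neighbor_term
```

Images with no tag overlap contribute nothing, so the aggregation only reinforces regions that similar, co-tagged images also mark as salient.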


2020 ◽  
Vol E103.D (4) ◽  
pp. 910-913
Author(s):  
Cheng XU ◽  
Wei HAN ◽  
Dongzhen WANG ◽  
Daqing HUANG

PLoS ONE ◽  
2017 ◽  
Vol 12 (7) ◽  
pp. e0180519 ◽  
Author(s):  
Na Li ◽  
Hui Xu ◽  
Zhenhua Wang ◽  
Lining Sun ◽  
Guodong Chen

Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 421 ◽  
Author(s):  
Mian Fareed ◽  
Qi Chun ◽  
Gulnaz Ahmed ◽  
Adil Murtaza ◽  
Muhammad Asif ◽  
...  

Image saliency detection is a very helpful step in many computer-vision-based smart systems, reducing computational complexity by focusing only on the salient parts of the image. Currently, image saliency is detected through representation-based generative schemes, as these schemes are helpful for extracting concise representations of the stimuli and capturing the high-level semantics of visual information with a small number of active coefficients. In this paper, we propose a novel framework for salient region detection that uses appearance-based and regression-based schemes. The framework segments the image and forms reconstructive dictionaries from the four sides of the image. These side-specific dictionaries are then used to obtain saliency maps for the corresponding sides. A unified version of these maps is subsequently employed by a representation-based model to obtain a contrast-based salient region map. This map is used to obtain two regression-based maps with LAB and RGB color features, which are unified through an optimization-based method to produce the final saliency map. Furthermore, the side-specific reconstructive dictionaries are extracted from the boundary and background pixels, which are enriched with geometrical and visual information. The approach has been thoroughly evaluated on five datasets and compared with the seven most recent approaches. The simulation results reveal that our model performs favorably in comparison with current saliency detection schemes.
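The core intuition behind boundary-based reconstructive dictionaries is that image borders are usually background, so regions that a background dictionary reconstructs poorly are likely salient. The sketch below shows a common PCA-based (dense reconstruction) variant of this idea, not the paper's side-specific, representation-based scheme; the feature layout and all names are assumptions.

```python
import numpy as np

def reconstruction_saliency(features, boundary_idx, k=1):
    """Saliency as reconstruction error from a background dictionary.
    Rows of `features` are region descriptors; `boundary_idx` marks
    border regions used to build the dictionary."""
    B = features[boundary_idx]
    mu = B.mean(axis=0)
    # top-k principal directions of the background dictionary
    U, _, _ = np.linalg.svd((B - mu).T, full_matrices=False)
    Uk = U[:, :k]
    # reconstruct every region from the background subspace
    rec = mu + (features - mu) @ Uk @ Uk.T
    err = np.linalg.norm(features - rec, axis=1)
    rng = err.max() - err.min()
    return (err - err.min()) / rng if rng > 0 else err
```

A foreground region whose descriptor lies far from the subspace spanned by the border regions gets a large error, and hence the highest normalized saliency.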


2018 ◽  
Vol 12 (9) ◽  
pp. 1663-1672 ◽  
Author(s):  
Abdul Rahman El Sayed ◽  
Abdallah El Chakik ◽  
Hassan Alabboud ◽  
Adnan Yassine
