Background modeling by exploring multi-scale fusion of texture and intensity in complex scenes

Author(s): Zhong Zhang, Baihua Xiao, Chunheng Wang, Wen Zhou, Shuang Liu
2020 · Vol 309 · pp. 03028
Author(s): Liyuan Chen, Zhonglong Zheng, Pengcheng Bian, Jiashuaizi Mo, Abd Erraouf Khodja

With the development of deep learning, research in computer vision is attracting increasing attention. As a pre-processing step for visual tasks, a saliency model may rely on a pure architecture. This paper proposes a new multi-scale fusion network that enriches high-level redundant information through an enlarged receptive field. Guided by an attention mechanism, the framework captures more effective spatial and channel correlations, and short connections between the high-level features and the features at each level transmit contextual information. The model can be used in a variety of complex scenes for end-to-end image detection, with a simple structure and strong versatility. Experimental results on several common datasets show that the proposed model achieves better performance in both visual quality and accuracy for small-object and multi-target detection.
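The two ingredients this abstract names, multi-scale fusion for an enlarged receptive field and channel-wise attention, can be sketched roughly as below. This is an illustrative numpy sketch under assumed layer shapes, not the paper's actual network; the SE-style gating here uses toy fixed maps in place of learned weights.

```python
import numpy as np

def channel_attention(feat, reduction=2):
    """SE-style channel gating: squeeze (global average pool), excite
    (a toy stand-in for two small FC layers), then rescale channels.
    Real networks learn the excitation weights; here they are fixed."""
    squeeze = feat.mean(axis=(1, 2))             # (c,) global channel context
    hidden = np.maximum(squeeze / reduction, 0)  # toy "FC + ReLU"
    gate = 1.0 / (1.0 + np.exp(-hidden))         # sigmoid gate in (0, 1)
    return feat * gate[:, None, None]

def multi_scale_fusion(feat, scales=(1, 2, 4)):
    """Average-pool the feature map at several scales, upsample back by
    nearest-neighbour repetition, and average: a parameter-free way to
    mix context from an enlarged receptive field into each location."""
    c, h, w = feat.shape
    fused = np.zeros_like(feat)
    for s in scales:
        pooled = feat.reshape(c, h // s, s, w // s, s).mean(axis=(2, 4))
        fused += pooled.repeat(s, axis=1).repeat(s, axis=2)
    return fused / len(scales)

x = np.random.rand(8, 16, 16)          # (channels, height, width)
y = channel_attention(multi_scale_fusion(x))
print(y.shape)                         # same shape as the input
```

The fusion step is the same spatial-pyramid idea used by pooling-pyramid modules; the attention step mirrors the squeeze-and-excitation pattern of emphasizing informative channels.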


2014 · Vol 74 (11) · pp. 3947-3966
Author(s): Li Sun, Wenjuan Sheng, Yiqing Liu

Author(s): Chenqiu Zhao, Tingting Zhang, Qianying Huang, Xiaohong Zhang, Dan Yang, ...

2018 · Vol 27 (3) · pp. 1112-1125
Author(s): Dan Yang, Chenqiu Zhao, Xiaohong Zhang, Sheng Huang

Sensors · 2021 · Vol 21 (16) · pp. 5575
Author(s): Chunyuan Wang, Yang Wu, Yihan Wang, Yiping Chen

With improvements in the quality and resolution of remote sensing (RS) images, scene recognition has come to play an important role in the RS community. However, owing to the special bird's-eye-view acquisition mode of imaging sensors, it remains challenging to construct a discriminative representation of diverse and complex scenes that improves RS image recognition performance. Capsule networks, which can learn the spatial relationships between features in an image, achieve good image classification performance, but the original capsule network is not suited to images with complex backgrounds. To address these issues, this paper proposes a novel end-to-end capsule network, termed DS-CapsNet, in which a new multi-scale feature enhancement module and a new Caps-SoftPool method are introduced, aggregating the advantageous attributes of a residual convolution architecture, the Diverse Branch Block (DBB), and the Squeeze-and-Excitation (SE) block. Using the residual DBB, multi-scale features are extracted and fused to recover a semantically strong feature representation; adopting SE emphasizes informative features and weakens less salient ones. The new Caps-SoftPool method reduces the number of parameters, which helps prevent over-fitting. DS-CapsNet achieves competitive and promising performance on RS image recognition through high-quality, robust capsule representations. Extensive experiments on two challenging datasets, AID and NWPU-RESISC45, demonstrate the robustness and superiority of the proposed DS-CapsNet for scene recognition tasks.
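The SoftPool idea that Caps-SoftPool builds on can be sketched as below: each pooling window is reduced to a softmax-weighted average of its own activations, so stronger responses dominate the pooled value while the operator stays parameter-free. This is the generic SoftPool operator, not the paper's Caps-SoftPool variant for capsules; window size and shapes are illustrative.

```python
import numpy as np

def softpool2d(x, k=2):
    """SoftPool over non-overlapping k x k windows of a (c, h, w) map:
    weights are the softmax of the activations inside each window, so
    the output is a convex combination biased toward large responses."""
    c, h, w = x.shape
    win = x.reshape(c, h // k, k, w // k, k).transpose(0, 1, 3, 2, 4)
    win = win.reshape(c, h // k, w // k, k * k)           # flatten windows
    weight = np.exp(win - win.max(axis=-1, keepdims=True))  # stable softmax
    weight /= weight.sum(axis=-1, keepdims=True)
    return (weight * win).sum(axis=-1)

x = np.random.rand(4, 8, 8)
pooled = softpool2d(x)
print(pooled.shape)   # (4, 4, 4): halved spatially, no learned parameters
```

Because the weights sum to one inside each window, every pooled value lies between that window's minimum and maximum, unlike average pooling (which dilutes strong activations) or max pooling (which discards everything else).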


2013 · Vol 385-386 · pp. 1439-1442
Author(s): Zhong Hai Li, Peng Bo Yu, Qing Cheng Zhang

Existing background-modeling methods mostly model color characteristics at the single-pixel level, so they are easily affected by shadows, lighting, weather, and noise, which can cause foreground apertures and discrete false-alarm noise. To address these deficiencies, this paper presents a background model based on multiscale Gaussian parameters. Experimental results show that it efficiently solves the problems of cavities and discrete false-alarm noise.
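The single-pixel Gaussian baseline that this paper improves on can be sketched as one update step of a per-pixel running-Gaussian model. This is the generic single-scale formulation, not the multiscale variant proposed here; the learning rate and threshold are illustrative assumptions.

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """One step of a per-pixel running-Gaussian background model:
    pixels farther than k standard deviations from the mean are flagged
    as foreground, then the Gaussian parameters are updated with
    learning rate alpha so the background adapts over time."""
    diff = frame - mean
    foreground = diff ** 2 > (k ** 2) * var           # boolean mask
    mean = mean + alpha * diff
    var = (1 - alpha) * var + alpha * diff ** 2
    return foreground, mean, np.maximum(var, 1e-6)    # keep variance positive

mean = np.full((4, 4), 0.5)                           # learned background
var = np.full((4, 4), 0.01)
frame = np.full((4, 4), 0.5)
frame[1, 1] = 1.0                                     # one "moving object" pixel
fg, mean, var = update_background(frame, mean, var)
print(fg)
```

Because each pixel carries its own independent Gaussian, this baseline exhibits exactly the failure modes the abstract cites: a shadow or lighting change shifts many pixels at once and triggers discrete false alarms, which is what motivates pooling evidence across scales.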

