Ship Detection in Gaofen-3 SAR Images Based on Sea Clutter Distribution Analysis and Deep Convolutional Neural Network

Sensors ◽  
2018 ◽  
Vol 18 (2) ◽  
pp. 334 ◽  
Author(s):  
Quanzhi An ◽  
Zongxu Pan ◽  
Hongjian You

2019 ◽  
Vol 11 (10) ◽  
pp. 1206 ◽  
Author(s):  
Tianwen Zhang ◽  
Xiaoling Zhang

As an active microwave sensor, synthetic aperture radar (SAR) is capable of all-day, all-weather earth observation, which has made it one of the most important means of high-resolution earth observation and global resource management. Ship detection in SAR images also plays an increasingly important role in ocean observation and disaster relief. At present, both traditional feature extraction methods and deep learning (DL) methods focus almost exclusively on improving ship detection accuracy, while detection speed is neglected. However, the speed of SAR ship detection is extraordinarily significant, especially in real-time maritime rescue and emergency military decision-making. To solve this problem, this paper proposes a novel approach for high-speed ship detection in SAR images based on a grid convolutional neural network (G-CNN). This method improves detection speed by meshing the input image, inspired by the basic idea of you only look once (YOLO), and by using depthwise separable convolution. G-CNN is a brand-new network structure proposed by us; it is mainly composed of a backbone convolutional neural network (B-CNN) and a detection convolutional neural network (D-CNN). First, the SAR images to be detected are divided into grid cells, and each grid cell is responsible for detecting specific ships. Then, the whole image is input into B-CNN to extract features. Finally, ship detection is completed in D-CNN at three scales. We experimented on an open SAR Ship Detection Dataset (SSDD) used by many other scholars and then validated the migration ability of G-CNN on two SAR images from RadarSat-1 and Gaofen-3.
The experimental results show that the detection speed of the proposed method is faster than that of existing methods such as the faster region-based convolutional neural network (Faster R-CNN), single shot multibox detector (SSD), and YOLO under the same hardware environment (an NVIDIA GTX1080 graphics processing unit (GPU)), while the detection accuracy is kept within an acceptable range. The proposed G-CNN ship detection system has great application value in real-time maritime disaster rescue and emergency military strategy formulation.
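The abstract attributes G-CNN's speed partly to depthwise separable convolution. A minimal sketch of why that choice saves computation, comparing parameter counts of a standard 3x3 convolution layer against its depthwise separable replacement (the layer sizes below are illustrative, not taken from the paper):

```python
# Illustrative parameter counts for a standard vs. a depthwise
# separable 3x3 convolution layer (channel counts are hypothetical).

def standard_conv_params(c_in, c_out, k=3):
    # Each of the c_out filters spans all c_in input channels.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k=3):
    # Depthwise step: one kxk filter per input channel;
    # pointwise step: a 1x1 convolution that mixes channels.
    return c_in * k * k + c_in * c_out

c_in, c_out = 256, 256
std = standard_conv_params(c_in, c_out)
sep = depthwise_separable_params(c_in, c_out)
print(std, sep, round(std / sep, 1))  # roughly an 8-9x reduction here
```

For k = 3 the saving approaches a factor of k*k = 9 as the channel counts grow, which is why depthwise separable convolutions are a common building block in speed-oriented detectors.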


2020 ◽  
Author(s):  
Mikael Strauhs ◽  
Antonio Rafael Paulino de Lira ◽  
Ramiro Fernandes Ramos ◽  
João Vitor Marques de Oliveira Moita ◽  
Severino Virgínio Martins Neto ◽  
...  

2019 ◽  
Vol 11 (17) ◽  
pp. 1965 ◽  
Author(s):  
Yanan You ◽  
Zezhong Li ◽  
Bohao Ran ◽  
Jingyi Cao ◽  
Sudi Lv ◽  
...  

High-resolution optical remote sensing data can be used to investigate human behavior and the activities of artificial targets, for example, ship detection at sea. Recently, the deep convolutional neural network (DCNN), from the field of deep learning, has been widely used in image processing, especially in target detection tasks. Therefore, this paper proposes a complete DCNN-based processing system called broad area target search (BATS), which covers data import, processing, and storage. In this system, to address the problem of onshore false alarms, a method named Mask-Faster R-CNN is proposed that differentiates target and non-target areas by introducing a semantic segmentation sub-network into Faster R-CNN. In addition, we propose a DCNN framework named Saliency-Faster R-CNN to deal with multi-scale ship detection, which solves the missed detections caused by the inconsistency between large-scale targets and training samples. Based on these DCNN-based methods, the BATS system is tested to verify that it can integrate different ship detection methods to effectively solve the problems existing in the ship detection task. Furthermore, our system provides an interface that lets users optimize the DCNN-based methods through data-driven learning.
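The onshore false-alarm idea above can be sketched in a simplified form: given a sea/land mask (which Mask-Faster R-CNN learns with its segmentation sub-network, but which is simply given here), reject detection boxes that lie mostly on land. The function name, threshold, and toy data are all hypothetical, not from the paper:

```python
import numpy as np

# Simplified post-filter: keep only detections whose bounding box
# overlaps the sea mask by at least min_sea_frac.

def filter_onshore(boxes, sea_mask, min_sea_frac=0.5):
    """boxes: list of (x0, y0, x1, y1); sea_mask: HxW bool, True = sea."""
    kept = []
    for (x0, y0, x1, y1) in boxes:
        patch = sea_mask[y0:y1, x0:x1]
        if patch.size and patch.mean() >= min_sea_frac:
            kept.append((x0, y0, x1, y1))
    return kept

mask = np.zeros((100, 100), dtype=bool)
mask[:, 50:] = True                      # right half of the scene is sea
boxes = [(10, 10, 30, 30), (60, 10, 80, 30)]
print(filter_onshore(boxes, mask))       # only the box over sea survives
```

The real system fuses the mask inside the detector rather than filtering afterwards, but the effect on onshore false alarms is the same in spirit.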


IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 50693-50708 ◽  
Author(s):  
Juanping Zhao ◽  
Zenghui Zhang ◽  
Wenxian Yu ◽  
Trieu-Kien Truong

2020 ◽  
Vol 2020 (4) ◽  
pp. 4-14
Author(s):  
Vladimir Budak ◽  
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of the axial light intensity on the beam angle was obtained. Using this collection, a new deep convolutional neural network (CNN) based on the pre-trained GoogLeNet was trained by transfer learning. Grad-CAM analysis showed that the trained network correctly identifies the features of the objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight and the corresponding type of lens with its technical parameters using this new CNN-based model.
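The dependence of axial intensity on beam angle mentioned above has a simple idealized form that the fitted data can be compared against: if a spotlight emits its flux uniformly into a cone, the axial intensity is the flux divided by the cone's solid angle, 2*pi*(1 - cos(beam_angle/2)). This is a textbook approximation, not the relation fitted in the article; real lenses deviate from it:

```python
import math

# Idealized axial intensity of a spotlight: flux (lumens) spread
# uniformly over the solid angle of a cone with the given beam angle.

def axial_intensity(flux_lm, beam_angle_deg):
    half = math.radians(beam_angle_deg) / 2.0
    solid_angle_sr = 2.0 * math.pi * (1.0 - math.cos(half))
    return flux_lm / solid_angle_sr   # candela

# Narrower beams concentrate the same flux into higher axial intensity.
for angle in (10, 24, 60):
    print(angle, round(axial_intensity(1000.0, angle)))
```

Under this model, halving the beam angle roughly quadruples the axial intensity for small angles, which matches the qualitative trend a lens/reflector survey would show.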

