LS-SSDD-v1.0: A Deep Learning Dataset Dedicated to Small Ship Detection from Large-Scale Sentinel-1 SAR Images

2020 ◽  
Vol 12 (18) ◽  
pp. 2997 ◽  
Author(s):  
Tianwen Zhang ◽  
Xiaoling Zhang ◽  
Xiao Ke ◽  
Xu Zhan ◽  
Jun Shi ◽  
...  

Ship detection in synthetic aperture radar (SAR) images is becoming a research hotspot. In recent years, with the rise of artificial intelligence, deep learning has come to dominate the SAR ship detection community thanks to its higher accuracy, faster speed, and reduced need for human intervention. However, there is still no reliable deep learning SAR ship detection dataset that can support the practical migration of ship detection to large-scene space-borne SAR images. To solve this problem, this paper releases a Large-Scale SAR Ship Detection Dataset-v1.0 (LS-SSDD-v1.0), built from Sentinel-1 data, for small ship detection against large-scale backgrounds. LS-SSDD-v1.0 contains 15 large-scale SAR images whose ground truths are correctly labeled by SAR experts, drawing support from the Automatic Identification System (AIS) and Google Earth. To facilitate network training, the large-scale images are directly cut into 9000 sub-images without bells and whistles, which also makes it convenient to present detection results on the full large-scale SAR images. Notably, LS-SSDD-v1.0 has five advantages: (1) large-scale backgrounds, (2) small ship detection, (3) abundant pure backgrounds, (4) a fully automatic detection flow, and (5) numerous standardized research baselines. Last but not least, exploiting the abundant pure backgrounds, we also propose a Pure Background Hybrid Training mechanism (PBHT-mechanism) to suppress false alarms from land in large-scale SAR images. Experimental results of an ablation study verify the effectiveness of the PBHT-mechanism. LS-SSDD-v1.0 should inspire scholars to conduct extensive research into SAR ship detection methods with engineering application value, which is conducive to the progress of SAR intelligent interpretation technology.
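The 15-scene to 9000-sub-image cut described above is plain index arithmetic. A minimal sketch in Python, assuming for illustration scenes of 24,000 × 16,000 pixels and 800 × 800 tiles (the pixel and tile sizes are assumptions chosen to match the counts stated in the abstract, which gives only 15 images and 9000 sub-images):

```python
def tile_grid(height, width, tile):
    """Split an image of size height x width into non-overlapping
    tile x tile sub-images; return the (row, col) offset of each."""
    return [(r, c)
            for r in range(0, height - tile + 1, tile)
            for c in range(0, width - tile + 1, tile)]

# One assumed 24000 x 16000 scene cut into 800 x 800 tiles gives
# 30 x 20 = 600 sub-images; 15 scenes then give 9000 in total.
offsets = tile_grid(16000, 24000, 800)
```

Each offset pair can then be used to crop the corresponding sub-image and to map detections back to full-scene coordinates.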

2019 ◽  
Vol 11 (24) ◽  
pp. 2997 ◽  
Author(s):  
Clément Dechesne ◽  
Sébastien Lefèvre ◽  
Rodolphe Vadaine ◽  
Guillaume Hajduch ◽  
Ronan Fablet

The monitoring and surveillance of maritime activities are critical issues in both military and civilian fields, including, among others, fisheries monitoring, maritime traffic surveillance, coastal and at-sea safety operations, and tactical situations. In operational contexts, ship detection and identification are traditionally performed by a human observer who identifies all kinds of ships from a visual analysis of remotely sensed images. Such a task is very time consuming and cannot be conducted at a very large scale, while Sentinel-1 SAR data now provide regular and worldwide coverage. Meanwhile, with the emergence of GPUs, deep learning methods are now established as state-of-the-art solutions for computer vision, replacing human intervention in many contexts. They have been shown to be well suited to ship detection, most often with very high resolution SAR or optical imagery. In this paper, we go one step further and investigate a deep neural network for the joint classification and characterization of ships from Sentinel-1 SAR data. We benefit from the synergies between AIS (Automatic Identification System) and Sentinel-1 data to build significant training datasets. We design a multi-task neural network architecture composed of one joint convolutional network connected to three task-specific networks for ship detection, classification, and length estimation. The experimental assessment shows that our network provides promising results, with accurate classification and length estimation (classification overall accuracy: 97.25%, mean length error: 4.65 m ± 8.55 m).
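The two reported metrics are standard. A minimal sketch of how overall classification accuracy and the mean ± standard deviation of the length error are computed (variable names are illustrative; the paper's evaluation protocol is not detailed in the abstract):

```python
def overall_accuracy(y_true, y_pred):
    """Fraction of samples whose predicted class matches the label."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def length_error_stats(true_m, pred_m):
    """Mean and (population) standard deviation of the absolute
    length error in meters."""
    errors = [abs(t - p) for t, p in zip(true_m, pred_m)]
    mean = sum(errors) / len(errors)
    std = (sum((e - mean) ** 2 for e in errors) / len(errors)) ** 0.5
    return mean, std
```

The "4.65 m ± 8.55 m" figure above corresponds to the (mean, std) pair returned by the second function.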


2019 ◽  
Vol 11 (9) ◽  
pp. 1078 ◽  
Author(s):  
Ramona Pelich ◽  
Marco Chini ◽  
Renaud Hostache ◽  
Patrick Matgen ◽  
Carlos Lopez-Martinez ◽  
...  

This research addresses the use of dual-polarimetric descriptors for automatic large-scale ship detection and characterization from synthetic aperture radar (SAR) data. Ship detection is usually performed independently on each polarization channel and the detection results are merged subsequently. In this study, we propose to make use of the complex coherence between the two polarization channels of Sentinel-1 and to perform vessel detection in this domain. To that end, an automatic algorithm, based on the dual-polarization coherence and applicable to entire large-scale SAR scenes in a timely manner, is developed. Automatic identification system (AIS) data are used for an extensive, likewise large-scale, cross-comparison with the SAR-based detections. The comparative assessment allows us to evaluate the added value of the dual-polarization complex coherence with respect to SAR intensity images for ship detection, as well as the SAR detection performance as a function of vessel size. The proposed methodology is justified statistically and tested on Sentinel-1 data acquired over two areas with contrasting traffic conditions: the English Channel and the Pacific coastline of Mexico. The results indicate a very high SAR detection rate, i.e., >80%, for vessels larger than 60 m, and a decrease in detection rate down to 40% for smaller vessels. In addition, the analysis highlights many SAR detections without corresponding AIS positions, indicating the complementarity of SAR with respect to cooperative sources for detecting dark vessels.
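The dual-polarization complex coherence at the heart of this detector is, in its standard form, the normalized cross-correlation of the two complex channels estimated over a local window. A minimal sketch (the window here is simply the input lists; channel names and window handling are illustrative, not the paper's implementation):

```python
def coherence(s_vv, s_vh):
    """Magnitude of the sample complex coherence between two
    co-registered complex polarization channels:
    |sum(a * conj(b))| / sqrt(sum|a|^2 * sum|b|^2)."""
    num = abs(sum(a * b.conjugate() for a, b in zip(s_vv, s_vh)))
    den = (sum(abs(a) ** 2 for a in s_vv) *
           sum(abs(b) ** 2 for b in s_vh)) ** 0.5
    return num / den

# Identical channels are perfectly coherent (value 1); ships tend to
# raise the local coherence relative to the surrounding sea clutter.
```

Perfect correlation yields 1, uncorrelated noise yields values near 0, which is what makes a simple threshold on this quantity usable for detection.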


Author(s):  
G. Matasci ◽  
J. Plante ◽  
K. Kasa ◽  
P. Mousavi ◽  
A. Stewart ◽  
...  

Abstract. We present a deep learning-based vessel detection and (re-)identification approach for spaceborne optical images. We introduce these two components as part of a maritime surveillance-from-space pipeline and present experimental results on challenging real-world maritime datasets derived from WorldView imagery. First, we developed a vessel detection model based on RetinaNet, achieving an F1-score of 0.795 on a challenging multi-scale dataset. We then collected a large-scale dataset for vessel identification by applying the detection model to 200+ optical images, detecting the vessels therein, and assigning them an identity via an Automatic Identification System association framework. A vessel re-identification model based on Twin neural networks was then trained on this dataset, which features 2500+ unique vessels with multiple repeated occurrences across different acquisitions. The model naturally establishes similarities between vessel images: given an input image of a specific vessel of interest, it returns a relevant ranking of candidate vessels from a database, with top-1 and top-10 accuracies of 38.7% and 76.5%, respectively. This study demonstrates the potential offered by the latest advances in deep learning and computer vision when applied to optical remote sensing imagery in a maritime context, opening new opportunities for automated vessel monitoring and tracking from space.
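The top-1 and top-10 accuracies quoted above are standard retrieval metrics: a query counts as a hit if the correct identity appears among the first k entries of the ranking the model returns. A minimal sketch (inputs are illustrative; the similarity model itself is omitted):

```python
def top_k_hit(ranked_ids, true_id, k):
    """True if the ground-truth identity is in the first k candidates."""
    return true_id in ranked_ids[:k]

def top_k_accuracy(queries, k):
    """queries: list of (ranked candidate ids, ground-truth id) pairs.
    Returns the fraction of queries hit within the top k."""
    hits = sum(top_k_hit(ranked, true_id, k) for ranked, true_id in queries)
    return hits / len(queries)
```

Top-10 accuracy is by construction at least as high as top-1, which matches the 76.5% versus 38.7% figures.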


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Zhelin Li ◽  
Lining Zhao ◽  
Xu Han ◽  
Mingyang Pan ◽  
Feng-Jang Hwang

Ship detection is one of the most important research topics in intelligent ship navigation and monitoring. As a supplement to classical navigational equipment such as radar and the Automatic Identification System (AIS), target detection based on computer vision and deep learning has become an important new method. The YOLOv3 target detector offers both high detection speed and accuracy and meets the real-time requirements of ship detection. However, YOLOv3 has a large number of backbone network parameters and demands high hardware performance, which hinders the popularization of applications. Building on YOLOv3, this paper proposes a lightweight ship detection model (LSDM) in which the backbone network is improved with dense connections inspired by DenseNet, and the feature pyramid networks are improved by replacing normal convolutions with spatially separated convolutions. The two improvements greatly reduce the number of parameters and optimize the network structure. The experimental results show that, with only one third of the parameters of YOLOv3, the LSDM achieves higher accuracy and speed for ship detection. In addition, the LSDM is simplified further by reducing the number of densely connected units, forming a model called LSDM-tiny. The experimental results show that LSDM-tiny has a detection speed similar to that of YOLOv3-tiny but considerably higher accuracy.
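The parameter saving from spatially separating a convolution is simple arithmetic: a k × k kernel is replaced by a k × 1 kernel followed by a 1 × k kernel. A minimal sketch of the weight counts, assuming for illustration equal input and output channel counts and ignoring biases (the LSDM's actual channel configuration is not given in the abstract):

```python
def standard_conv_params(k, c):
    """Weights of a k x k convolution with c input and c output channels."""
    return k * k * c * c

def spatially_separated_params(k, c):
    """Weights of a k x 1 followed by a 1 x k convolution,
    each with c input and c output channels."""
    return k * c * c + k * c * c
```

For k = 3 this cuts the weight count to 2/3 of the standard convolution, and the saving grows with larger kernels; the overall one-third parameter budget reported for LSDM also reflects the densely connected backbone, not this substitution alone.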


2020 ◽  
Vol 12 (9) ◽  
pp. 1443
Author(s):  
Juyoung Song ◽  
Duk-jin Kim ◽  
Ki-mook Kang

The development of convolutional neural networks (CNNs) optimized for object detection has led to significant advances in ship detection. Although training data critically affect the performance of a CNN-based model, previous studies focused mostly on enhancing the architecture of the model itself. This study developed a sophisticated, automatic methodology for generating verified and robust training data from synthetic aperture radar (SAR) images and automatic identification system (AIS) data. Training-data extraction begins by interpolating the discretely received AIS positions to the exact position of each ship at the time of image acquisition. The interpolation is conducted by applying a Kalman filter, followed by compensation of the Doppler frequency shift. The bounding box for each ship is constructed tightly, considering the installation position of the AIS equipment and the exact size of the ship. From 18 Sentinel-1 SAR images, the completely automated procedure yielded 7489 training samples, which were compared with a separate set of training data from visual interpretation. The ship detection model trained on the automatic training data achieved an overall detection performance of 0.7713 on 3 Sentinel-1 SAR images, exceeding that of the manual training data while avoiding false detections from artificial harbor structures and azimuth-ambiguity ghost signals.
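The AIS interpolation step can be illustrated with a one-dimensional constant-velocity Kalman filter (a deliberately simplified stand-in: the paper's actual state model, noise settings, and the subsequent Doppler-shift compensation are not specified in the abstract, and the process/measurement noise values below are illustrative):

```python
def kalman_1d(zs, dt, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over 1-D position reports.
    zs: measured positions at fixed interval dt; returns the
    filtered position estimates, one per measurement."""
    x, v = zs[0], 0.0                      # state: position, velocity
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0   # state covariance
    out = []
    for z in zs:
        # predict: x <- x + v*dt, P <- F P F^T + Q
        x += v * dt
        p00 = p00 + dt * (p01 + p10) + dt * dt * p11 + q
        p01 += dt * p11
        p10 += dt * p11
        p11 += q
        # update with measurement z (H = [1, 0], noise variance r)
        s = p00 + r
        k0, k1 = p00 / s, p10 / s          # Kalman gain
        y = z - x                          # innovation
        x += k0 * y
        v += k1 * y
        n00, n01 = (1 - k0) * p00, (1 - k0) * p01
        n10, n11 = p10 - k1 * p00, p11 - k1 * p01
        p00, p01, p10, p11 = n00, n01, n10, n11
        out.append(x)
    return out
```

Once the state converges, the same predict step can be evaluated at the SAR acquisition time to place the ship between two AIS reports.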


2021 ◽  
Vol 13 (8) ◽  
pp. 1509
Author(s):  
Xikun Hu ◽  
Yifang Ban ◽  
Andrea Nascetti

Accurate burned area information is needed to assess the impacts of wildfires on people, communities, and natural ecosystems. Various burned area detection methods have been developed using satellite remote sensing measurements with wide coverage and frequent revisits. Our study aims to demonstrate the capability of deep learning (DL) models for automatically mapping burned areas from uni-temporal multispectral imagery. Specifically, several semantic segmentation network architectures, i.e., U-Net, HRNet, Fast-SCNN, and DeepLabv3+, and machine learning (ML) algorithms were applied to Sentinel-2 and Landsat-8 imagery over three wildfire sites in two different local climate zones. The validation results show that the DL algorithms outperform the ML methods in the two cases with compact burned scars, while ML methods seem more suitable for mapping dispersed burns in boreal forests. Using Sentinel-2 images, U-Net and HRNet exhibit nearly identical performance, with higher kappa (around 0.9), on one heterogeneous Mediterranean fire site in Greece; Fast-SCNN performs better than the others, with kappa over 0.79, on one compact boreal forest fire with varied burn severity in Sweden. Furthermore, when the trained models are transferred directly to the corresponding Landsat-8 data, HRNet dominates among the DL models across the three test sites and preserves high accuracy. The results demonstrate that DL models can make full use of contextual information and capture spatial details at multiple scales from fire-sensitive spectral bands to map burned areas. Using only a post-fire image, the DL methods not only provide an automatic, accurate, and bias-free large-scale mapping option with cross-sensor applicability, but also have the potential to be used for onboard processing on the next generation of Earth observation satellites.
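The kappa values quoted above are Cohen's kappa, which corrects raw pixel agreement for the agreement expected by chance. A minimal sketch for the binary burned/unburned case (the counts are per-pixel confusion-matrix entries; variable names are illustrative):

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for a binary burned / unburned classification,
    from confusion-matrix counts."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                         # observed agreement
    p_burned = ((tp + fp) / n) * ((tp + fn) / n)
    p_unburned = ((fn + tn) / n) * ((fp + tn) / n)
    pe = p_burned + p_unburned                 # chance agreement
    return (po - pe) / (1 - pe)
```

Kappa of 1 means perfect agreement; values around 0.9, as reported for U-Net and HRNet on Sentinel-2, indicate agreement far above chance.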


2021 ◽  
Vol 13 (10) ◽  
pp. 1909
Author(s):  
Jiahuan Jiang ◽  
Xiongjun Fu ◽  
Rui Qin ◽  
Xiaoyan Wang ◽  
Zhifeng Ma

Synthetic Aperture Radar (SAR) has become one of the important technical means of marine monitoring in the field of remote sensing thanks to its all-day, all-weather capability. Monitoring ships in national territorial waters supports maritime law enforcement, maritime traffic control, and national maritime security, so ship detection has long been a research hotspot. As the field has moved from traditional detection methods to deep learning, most research has relied on ever-growing Graphics Processing Unit (GPU) computing power to propose increasingly complex and computationally intensive strategies, while the transplantation of optical-image detectors has ignored the low signal-to-noise ratio, low resolution, single-channel nature, and other characteristics arising from the SAR imaging principle. By pursuing detection accuracy while neglecting detection speed and practical deployment, almost all algorithms rely on powerful clustered desktop GPUs and cannot be deployed on the front line of marine monitoring to cope with changing conditions. To address these issues, this paper proposes a multi-channel fusion SAR image processing method that makes full use of image information and the network's feature extraction ability, built on the latest You Only Look Once version 4 (YOLO-V4) deep learning framework. The YOLO-V4-light network was tailored for real-time deployment, significantly reducing the model size, detection time, number of computational parameters, and memory consumption, and the network was refined for three-channel images to compensate for the accuracy loss due to light-weighting.
The test experiments were run entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), simplifying the model while maintaining a lead over most existing methods. The proposed YOLO-V4-light ship detection algorithm has great practical value in maritime safety monitoring and emergency rescue.
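The Average Precision figure is the area under the precision-recall curve of the score-ranked detections. A minimal sketch of the computation (the scores and TP/FP labels are illustrative inputs; the IoU matching that produces the labels is omitted):

```python
def average_precision(scored_detections, n_positives):
    """AP as the area under the precision-recall curve.
    scored_detections: (confidence score, is_true_positive) pairs.
    n_positives: total number of ground-truth objects."""
    ranked = sorted(scored_detections, key=lambda sl: -sl[0])
    tp = fp = 0
    ap, last_recall = 0.0, 0.0
    for _, is_tp in ranked:
        if is_tp:
            tp += 1
        else:
            fp += 1
        recall = tp / n_positives
        precision = tp / (tp + fp)
        ap += (recall - last_recall) * precision  # rectangle of PR curve
        last_recall = recall
    return ap
```

False positives contribute zero recall width, so they lower AP only by dragging down the precision of later true positives.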


2021 ◽  
Vol 13 (13) ◽  
pp. 2558
Author(s):  
Lei Yu ◽  
Haoyu Wu ◽  
Zhi Zhong ◽  
Liying Zheng ◽  
Qiuyue Deng ◽  
...  

Synthetic aperture radar (SAR) is an active Earth observation system with a certain surface penetration capability that can be employed for all-day, all-weather observation. Ship detection using SAR is of great significance to maritime safety and port management. With the wide and successful application of deep learning to ordinary images, an increasing number of detection algorithms have entered the field of remote sensing. SAR images are characterized by small, sparse targets and high noise. Two-stage detection methods, such as faster regions with convolutional neural network features (Faster RCNN), give good results when applied to ship detection in SAR images, but their efficiency is low and their structure demands many computing resources, so they are not suitable for real-time detection. One-stage detection methods, such as the single shot multibox detector (SSD), make up for the speed shortcomings of two-stage algorithms but lack effective use of information from different layers, so they do not match two-stage algorithms on small-target detection. We propose the two-way convolution network (TWC-Net), based on a two-way convolution structure, and use multiscale feature mapping to process SAR images. The two-way convolution module effectively extracts features from SAR images, and the multiscale mapping module effectively processes shallow and deep feature information. TWC-Net avoids the loss of small-target information during feature extraction while guaranteeing good perception of large targets by the deep feature maps. We tested the performance of our proposed method on SSDD, a common SAR ship dataset. The experimental results show that our proposed method achieves higher recall and precision, with an F-Measure of 93.32%, and it has fewer parameters and lower memory consumption than other methods.
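The recall, precision, and F-Measure reported above are computed by matching detections to ground-truth boxes, conventionally by intersection-over-union (IoU). A minimal sketch with greedy matching (box format and the 0.5 IoU threshold are common conventions, assumed here rather than taken from the paper):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def f_measure(detections, ground_truth, thr=0.5):
    """Greedily match detections to ground truth by IoU, then
    return the F-Measure (harmonic mean of precision and recall)."""
    matched, tp = set(), 0
    for d in detections:
        for i, g in enumerate(ground_truth):
            if i not in matched and iou(d, g) >= thr:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because the F-Measure is a harmonic mean, a high value such as 93.32% requires both recall and precision to be high simultaneously.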

