Phenomenology-informed techniques for machine learning with measured and synthetic SAR imagery

Author(s):
Christopher P. Walker, Kelsie M. Larson, Ireena A. Erteza, Brian K. Bray
2021
Author(s):
Jona Raphael, Ben Eggleston, Ryan Covington, Tatianna Evanisko, Sasha Bylsma, ...

Operational oil discharges from ships, also known as "bilge dumping," have been identified as a major source of petroleum products entering our oceans, cumulatively exceeding the largest oil spills, such as the Exxon Valdez and Deepwater Horizon spills, even when considered over short time spans. However, we still do not have a good estimate of:
- How much oil is being discharged;
- Where the discharge is happening;
- Who the responsible vessels are.
This makes it difficult to prevent and respond effectively to oil pollution that can damage our marine and coastal environments and the economies that depend on them.

In this presentation we will share SkyTruth's recent work to address these gaps using machine learning tools to detect oil pollution events and identify the responsible vessels when possible. We use a convolutional neural network (CNN) with a ResNet-34 architecture to perform pixel segmentation on all incoming Sentinel-1 synthetic aperture radar (SAR) imagery to classify slicks. Despite the satellites' incomplete oceanic coverage, we have been detecting an average of 135 vessel slicks per month and have identified several geographic hotspots where oily discharges occur regularly. For the images that capture a vessel in the act of discharging oil, we rely on an Automatic Identification System (AIS) database to extract details about the ships, including vessel type and flag state.
We will share our experience:
- Making sufficient training data from inherently sparse satellite image datasets;
- Building a computer vision model using PyTorch and fastai;
- Fully automating the process in the Amazon Web Services (AWS) cloud.
The application has been running continuously since August 2020, has processed over 380,000 Sentinel-1 images, and has populated a database with more than 1100 high-confidence slicks from vessels. We will discuss preliminary results from this dataset and the remaining challenges to be overcome.

Our objective in making this information and the underlying code, models, and training data freely available to the public and to governments around the world is to enable public pressure campaigns to improve the prevention of and response to pollution events. Learn more at https://skytruth.org/bilge-dumping/
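The physical signal behind the segmentation task is that oil films damp the short capillary waves that scatter radar energy, so slicks appear as dark patches in SAR backscatter. The following is a toy threshold-and-label sketch of that separability in NumPy/SciPy; it is not SkyTruth's ResNet-34 model, and the thresholds and synthetic scene are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def detect_dark_slicks(backscatter_db, threshold_db=-3.0, min_pixels=50):
    """Toy detector for contiguous dark regions in a SAR backscatter image.

    Oil films damp capillary waves, so slicks appear dark against the
    surrounding sea clutter. This illustrates the signal only; it is not
    the CNN pixel-segmentation model described above.
    """
    anomaly = backscatter_db - backscatter_db.mean()  # deviation from scene mean (dB)
    dark = anomaly < threshold_db
    labels, n = ndimage.label(dark)                   # connected components
    sizes = ndimage.sum(dark, labels, index=np.arange(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]  # drop speckle hits
    return np.isin(labels, keep)

# Synthetic scene: sea clutter around -8 dB with a -15 dB slick patch
rng = np.random.default_rng(0)
scene = rng.normal(-8.0, 0.5, (128, 128))
scene[40:60, 30:90] = rng.normal(-15.0, 0.5, (20, 60))
mask = detect_dark_slicks(scene)
```

A real pipeline must also reject look-alikes (low-wind areas, biogenic films), which is one reason a learned segmentation model is used instead of a fixed threshold.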


2019, Vol 45 (6), pp. 723-732
Author(s):
Weizeng Shao, Yingying Ding, Jichao Li, Shuiping Gou, Ferdinando Nunziata, ...

2019, Vol 124 (9), pp. 6658-6672
Author(s):
Mauro M. Barbat, Thomas Rackow, Hartmut H. Hellmer, Christine Wesche, Mauricio M. Mata

2017, Vol 12 (sp), pp. 646-655
Author(s):
Yanbing Bai, Bruno Adriano, Erick Mas, Shunichi Koshimura, ...

Synthetic Aperture Radar (SAR) remote sensing is a useful tool for mapping earthquake-induced building damage. A series of operational methodologies based on either multi-temporal or post-event-only SAR images have been developed and used to support disaster response activities. This raises a critical question: which method is more likely to yield reliable results and should be adopted for disaster response when both pre- and post-event SAR data are available? To explore this question, this study takes the 2016 Kumamoto earthquake as a case study. ALOS-2/PALSAR-2 SAR images were employed within a machine learning framework to quantitatively compare building damage mapping using only post-event SAR images against mapping using multi-temporal SAR images. The results show that an overall accuracy of 64.5% was achieved when only post-event SAR images were used, which is 2.3% higher than the overall accuracy when multi-temporal SAR images were used. The estimated building damage ratios for the former and the latter are 29.7% and 31.1%, respectively, both close to the building damage ratio obtained from an optical image.
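Multi-temporal SAR damage mapping commonly rests on per-window change metrics such as the backscatter difference and the pre/post correlation coefficient: collapse decorrelates the speckle pattern and shifts mean backscatter. A minimal sketch of those two features on synthetic windows (the feature definitions follow widespread practice, not necessarily this paper's exact framework):

```python
import numpy as np

def change_features(pre_db, post_db):
    """Backscatter difference d and correlation coefficient r between
    co-registered pre- and post-event windows. Collapsed structures tend
    to show low r (decorrelated speckle) and large |d| (changed mean
    backscatter). Windows below are synthetic, not real SAR data."""
    d = float(post_db.mean() - pre_db.mean())
    r = float(np.corrcoef(pre_db.ravel(), post_db.ravel())[0, 1])
    return d, r

rng = np.random.default_rng(1)
pre = rng.normal(-5.0, 1.0, (32, 32))
post_intact = pre + rng.normal(0.0, 0.2, (32, 32))   # little change, high r
post_collapsed = rng.normal(-9.0, 1.0, (32, 32))     # darker, decorrelated

d_i, r_i = change_features(pre, post_intact)
d_c, r_c = change_features(pre, post_collapsed)
```

Post-event-only methods must instead rely on texture and scattering properties of a single image, which is the trade-off the study quantifies.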


2017, Vol 12 (2), pp. 259-271
Author(s):
Yanbing Bai, Bruno Adriano, Erick Mas, Hideomi Gokon, ...

Earthquake-induced building damage assessment is an indispensable prerequisite for disaster impact assessment, and the increasing availability of high-resolution Synthetic Aperture Radar (SAR) imagery has made it possible to construct damaged-building inventories soon after earthquakes strike. However, the shortage of pre-seismic SAR datasets and the lack of available building footprint data pose challenges for rapid building damage assessment. Taking advantage of recent advances in machine learning algorithms, this study proposes an object-based building damage assessment methodology that uses only post-event SAR imagery. A Random Forest object classification, together with a simplified approach to extracting built-up areas, was developed and tested on two ALOS-2/PALSAR-2 dual-polarimetric SAR images acquired over affected areas soon after the 2015 Nepal earthquake. A series of texture metrics, as well as the random scattering and reflection symmetry metrics, were found to significantly enhance classification accuracy, and feature selection had a positive effect on overall performance. Moreover, the proposed Random Forest framework achieved an overall accuracy of 93% with a kappa coefficient of 0.885 when an object scale of 60 × 60 pixels and 15 features were adopted. A comparative experiment with a k-nearest-neighbor framework demonstrated that the Random Forest framework is a significant step toward achieving a balanced two-class classification.
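The classification stage of such an object-based pipeline can be sketched with scikit-learn's RandomForestClassifier on synthetic stand-ins for the per-object feature vectors. The feature values, class separation, and train/test split here are illustrative assumptions, not the paper's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for per-object feature vectors (e.g. texture metrics
# plus polarimetric scattering/symmetry descriptors); values are illustrative.
rng = np.random.default_rng(42)
n = 600
damaged = rng.normal(1.0, 0.5, (n, 5))    # class 1: damaged objects
intact = rng.normal(-1.0, 0.5, (n, 5))    # class 0: intact objects
X = np.vstack([damaged, intact])
y = np.r_[np.ones(n), np.zeros(n)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
acc = accuracy_score(y_te, pred)          # overall accuracy
kappa = cohen_kappa_score(y_te, pred)     # agreement beyond chance
```

Reporting kappa alongside overall accuracy, as the paper does, guards against accuracy inflated by class imbalance.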


Sensors, 2018, Vol 18 (11), pp. 3704
Author(s):
Phuong-Thao Ngo, Nhat-Duc Hoang, Biswajeet Pradhan, Quang Nguyen, Xuan Tran, ...

Flash floods are widely recognized as one of the most devastating natural hazards in the world; prediction of flash flood-prone areas is therefore crucial for public safety and emergency management. This research proposes a new methodology for spatial prediction of flash floods based on Sentinel-1 SAR imagery and a new hybrid machine learning technique. The SAR imagery is used to detect flash flood inundation areas, whereas the new machine learning technique, a hybrid of the firefly algorithm (FA), Levenberg–Marquardt (LM) backpropagation, and an artificial neural network (termed FA-LM-ANN), was used to construct the prediction model. The Bac Ha Bao Yen (BHBY) area in the northwestern region of Vietnam was used as a case study. Accordingly, a Geographical Information System (GIS) database was constructed using 12 input variables (elevation, slope, aspect, curvature, topographic wetness index, stream power index, toposhade, stream density, rainfall, normalized difference vegetation index, soil type, and lithology), and the output of flood inundation areas was subsequently mapped. Using the database and FA-LM-ANN, the flash flood model was trained and verified. Model performance was validated via various metrics, including classification accuracy rate, area under the curve, precision, and recall. The flash flood model with the highest performance was then compared with benchmark models; the comparison indicates that the combination of FA and LM backpropagation is very effective and that the proposed FA-LM-ANN is a useful new tool for predicting flash flood susceptibility.
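The distinctive ingredient of the hybrid is the firefly algorithm's population-based global search, which could, for example, seed the network weights that LM backpropagation then refines locally. Below is a minimal sketch of that search stage, minimizing a sphere function as a stand-in for a training loss; the hyperparameters and the FA-then-LM division of labor are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def firefly_minimize(loss, dim, n_fireflies=20, n_iter=80,
                     beta0=1.0, gamma=0.1, alpha=0.05, seed=3):
    """Minimal firefly algorithm (after Yang, 2008): each firefly moves
    toward every brighter (lower-loss) one, with attraction decaying with
    squared distance, plus a small random step. Brightness is evaluated
    once per generation, a common simplification."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-4.0, 4.0, (n_fireflies, dim))
    best = np.inf
    for _ in range(n_iter):
        fit = np.array([loss(p) for p in pos])
        best = min(best, float(fit.min()))
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fit[j] < fit[i]:  # j is brighter: pull i toward j
                    r2 = float(np.sum((pos[i] - pos[j]) ** 2))
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.normal(size=dim)
    best = min(best, min(float(loss(p)) for p in pos))
    return best

# Sphere function as a stand-in for an ANN training loss over 2 weights
best = firefly_minimize(lambda w: float(np.sum(w ** 2)), dim=2)
```

In the hybrid scheme, a gradient-free search like this avoids the poor local minima that a purely gradient-based LM run can fall into from a bad initialization.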


Author(s):
Majid Mahrooghy, James V. Aanstoos, Rodrigo A. A. Nobrega, Khaled Hasan, Saurabh Prasad, ...

2021, Vol 172, pp. 189-206
Author(s):
Mauro M. Barbat, Thomas Rackow, Christine Wesche, Hartmut H. Hellmer, Mauricio M. Mata

2021, Vol 13 (7), pp. 1401
Author(s):
Genki Okada, Luis Moya, Erick Mas, Shunichi Koshimura

When flooding occurs, Synthetic Aperture Radar (SAR) imagery is often used to identify the flood extent and the affected buildings, for two reasons: (i) early disaster response, such as rescue operations, and (ii) flood risk analysis. Machine learning has proven valuable for identifying damaged buildings, but its performance depends on the number and quality of training data, which are scarce in the aftermath of a large-scale disaster. To address this issue, we propose using fragmentary but reliable news media photographs taken at the time of a disaster to detect the full extent of flooded buildings. As an experimental test, the flood that occurred in the town of Mabi, Japan, in 2018 is used. Five hand-engineered features were extracted from SAR images acquired before and after the disaster, and training data were collected based on news photographs. The release dates of the photographs were considered in order to assess the potential role of news information as a source of training data. A discriminant function was then calibrated using the training data and the support vector machine method. We found that news information released within 24 h of the disaster can classify flooded and non-flooded buildings with about 80% accuracy. The results were also compared with a standard unsupervised learning method, confirming that training data generated from news media photographs improve the accuracy obtained from unsupervised classification methods. We also discuss the potential role of news media as a source of reliable information to be used as training data, along with other activities associated with early disaster response.
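The scarce-label setting described above can be sketched with scikit-learn's SVC: a handful of buildings (the stand-in for those visible in early news photographs) carry labels, and the trained discriminant classifies the rest. The two features, their distributions, and the label count are illustrative assumptions, not the paper's data:

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

# Toy stand-ins for hand-engineered pre/post SAR features per building,
# e.g. backscatter change and pre/post correlation; values are illustrative.
rng = np.random.default_rng(7)
n = 400
flooded = np.column_stack([rng.normal(-4.0, 1.0, n), rng.normal(0.2, 0.1, n)])
dry = np.column_stack([rng.normal(0.0, 1.0, n), rng.normal(0.8, 0.1, n)])
X = np.vstack([flooded, dry])
y = np.r_[np.ones(n), np.zeros(n)]

# Mimic the scarce-label setting: only a few "news-photo" buildings are
# labeled; all remaining buildings are held out for evaluation.
labeled = rng.choice(len(X), size=30, replace=False)
mask = np.zeros(len(X), dtype=bool)
mask[labeled] = True
clf = SVC(kernel="linear").fit(X[mask], y[mask])
acc = accuracy_score(y[~mask], clf.predict(X[~mask]))
```

Because an SVM's decision boundary depends only on a few support vectors, even a small but reliable labeled set can generalize well, which is consistent with the paper's finding that photographs released within 24 h suffice for roughly 80% accuracy.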

