Shoreline Detection using Optical Remote Sensing: A Review

2019 ◽  
Vol 8 (2) ◽  
pp. 75 ◽  
Author(s):  
Seynabou Toure ◽  
Oumar Diop ◽  
Kidiyo Kpalma ◽  
Amadou Maiga

With coastal erosion and the increased interest in beach monitoring, there is a growing need to evaluate shoreline detection methods. Some studies have produced state-of-the-art reviews on shoreline definition and detection. With the development of remote sensing, shoreline detection is now achieved mainly through image processing, so it is important to evaluate the different image processing approaches used for the task. This paper presents a state-of-the-art review of image processing methods used for shoreline detection in remote sensing. It starts with a review of the key concepts that can be used for shoreline detection. The fundamental image processing methods applied are then presented, followed by a comparative analysis of these methods. The outcome of this study provides practical insights into shoreline detection.
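One of the key concepts a review like this typically covers is water indexing. As an illustrative sketch (a generic baseline, not a method attributed to this paper), the widely used Normalized Difference Water Index (NDWI) separates water from land, so the boundary of the water mask approximates the shoreline. The function name and toy band values below are hypothetical:

```python
import numpy as np

def ndwi_shoreline_mask(green, nir, threshold=0.0):
    """Classify water pixels with the Normalized Difference Water Index.

    NDWI = (green - nir) / (green + nir); values above the threshold are
    treated as water, so the water/land boundary approximates the shoreline.
    """
    green = green.astype(float)
    nir = nir.astype(float)
    ndwi = (green - nir) / (green + nir + 1e-9)  # avoid division by zero
    return ndwi > threshold

# Toy 2x2 scene: left column water (high green, low NIR), right column land.
green = np.array([[0.30, 0.10], [0.28, 0.12]])
nir = np.array([[0.05, 0.40], [0.06, 0.35]])
mask = ndwi_shoreline_mask(green, nir)
```

Extracting the boundary of such a mask (e.g., with a contour tracer) then yields a shoreline polyline.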

Author(s):  
Man Sing Wong ◽  
Xiaolin Zhu ◽  
Sawaid Abbas ◽  
Coco Yin Tung Kwok ◽  
Meilian Wang

Applications of Earth-observational remote sensing are rapidly increasing over urban areas. The latest regime shift from conventional urban development to smart-city development has triggered a rise in smart innovative technologies to complement spatial and temporal information in new urban design models. Remote sensing-based Earth observations provide critical information to close the gaps between real and virtual models of urban developments. Remote sensing itself has rapidly evolved since the launch of the first Earth-observation satellite, Landsat, in 1972. Technological advancements over the years have gradually improved the ground resolution of satellite images, from 80 m in the 1970s to 0.3 m in the 2020s. Apart from ground resolution, improvements have been made in many other aspects of satellite remote sensing, and the methods and techniques of information extraction have also advanced. However, to understand the latest developments and the scope of information extraction, it is important to understand the background and major techniques of image processing. This chapter briefly describes the history of optical remote sensing, the basic operations of satellite image processing, advanced methods of object extraction for modern urban designs, various applications of remote sensing in urban and peri-urban settings, and future satellite missions and directions of urban remote sensing.


2019 ◽  
Vol 11 (20) ◽  
pp. 2389 ◽  
Author(s):  
Deodato Tapete ◽  
Francesca Cigna

Illegal excavations in archaeological heritage sites (namely “looting”) are a global phenomenon. Satellite images are nowadays massively used by archaeologists to systematically document sites affected by looting. In parallel, remote sensing scientists are increasingly developing processing methods with a certain degree of automation to quantify looting using satellite imagery. To capture the state-of-the-art of this growing field of remote sensing, in this work 47 peer-reviewed research publications and grey literature are reviewed, accounting for: (i) the type of satellite data used, i.e., optical and synthetic aperture radar (SAR); (ii) properties of looting features utilized as proxies for damage assessment (e.g., shape, morphology, spectral signature); (iii) image processing workflows; and (iv) rationale for validation. Several scholars studied looting even prior to the conflicts recently affecting the Middle East and North Africa (MENA) region. Regardless of the method used for looting feature identification (either visual/manual, or with the aid of image processing), they preferred very high resolution (VHR) optical imagery, mainly black-and-white panchromatic, or pansharpened multispectral, whereas SAR is being used more recently by specialist image analysts only. Yet the full potential of VHR and high resolution (HR) multispectral information in optical imagery is to be exploited, with limited research studies testing spectral indices. To fill this gap, a range of looted sites across the MENA region are presented in this work, i.e., Lisht, Dashur, and Abusir el Malik (Egypt), and Tell Qarqur, Tell Jifar, Sergiopolis, Apamea, Dura Europos, and Tell Hizareen (Syria). The aim is to highlight: (i) the complementarity of HR multispectral data and VHR SAR with VHR optical imagery, (ii) usefulness of spectral profiles in the visible and near-infrared bands, and (iii) applicability of methods for multi-temporal change detection. 
Satellite data used for the demonstration include: HR multispectral imagery from the Copernicus Sentinel-2 constellation, VHR X-band SAR data from the COSMO-SkyMed mission, VHR panchromatic and multispectral WorldView-2 imagery, and further VHR optical data acquired by GeoEye-1, IKONOS-2, QuickBird-2, and WorldView-3, available through Google Earth. Commonalities between the different image processing methods are examined, alongside a critical discussion about automation in looting assessment, current lack of common practices in image processing, achievements in managing the uncertainty in looting feature interpretation, and current needs for more dissemination and user uptake. Directions toward sharing and harmonization of methodologies are outlined, and some proposals are made with regard to the aspects that the community working with satellite images should consider, in order to define best practices of satellite-based looting assessment.
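As an illustration of the multi-temporal change detection the authors discuss (a generic differencing baseline, not the specific workflow of any reviewed study), changed pixels between two co-registered acquisitions can be flagged by thresholding the difference image at a multiple of its standard deviation. The function name and threshold rule are assumptions:

```python
import numpy as np

def change_map(before, after, k=2.0):
    """Flag changed pixels between two co-registered, radiometrically
    comparable acquisitions: a pixel is 'changed' where the absolute
    difference exceeds k standard deviations of the difference image.
    This is a common simple baseline for mapping new disturbances
    such as looting scars."""
    diff = after.astype(float) - before.astype(float)
    return np.abs(diff) > k * diff.std()

# Toy example: a 10x10 scene where a single pixel changes drastically.
before = np.zeros((10, 10))
after = before.copy()
after[0, 0] = 100.0
cm = change_map(before, after)
```

In practice such a map would be post-processed (e.g., morphological filtering) before interpreting clusters as candidate looting pits.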


2020 ◽  
Vol 4 (2) ◽  
pp. 345-351
Author(s):  
Wicaksono Yuli Sulistyo ◽  
Imam Riadi ◽  
Anton Yudhana

Identification of object boundaries in a digital image is developing rapidly in line with advances in computer technology for image processing. Edge detection is important because humans recognizing an object in an image pay attention to its edges. Edge detection is performed because the edges of objects in an image carry very important information, such as size and shape. The edge detection methods used in this study are the Sobel, Prewitt, Laplace, Laplacian of Gaussian (LoG), and Kirsch operators, which are compared and analyzed. The comparison shows that the Sobel, Prewitt, and Kirsch operators produce the clearest edges, with PSNR values above 30 dB, while the Laplace and LoG operators average below 30 dB. Other quality comparisons use the histogram and contrast values, where the Laplace and LoG operators score highest, with an average histogram value of 110 and a contrast value of 24; the Sobel and Prewitt operators have the lowest histogram and contrast values.
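The kind of comparison the study performs can be sketched in a few lines. The following minimal NumPy implementation of the Sobel gradient magnitude and the PSNR metric is illustrative (the naive convolution loop is written for clarity, not speed, and the function names are hypothetical):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Naive 'valid'-mode 2-D correlation for small kernels."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_magnitude(img):
    """Edge strength as the magnitude of the Sobel gradient."""
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    return np.hypot(gx, gy)

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB, the metric used in the comparison."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# A vertical step edge: the Sobel response is zero in the flat region
# and strong along the step.
img = np.zeros((5, 5))
img[:, 3:] = 255.0
mag = sobel_magnitude(img)
```

The Prewitt and Kirsch operators drop into the same pipeline by swapping the kernels.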


2019 ◽  
Vol 11 (18) ◽  
pp. 2173 ◽  
Author(s):  
Jinlei Ma ◽  
Zhiqiang Zhou ◽  
Bo Wang ◽  
Hua Zong ◽  
Fei Wu

To accurately detect ships of arbitrary orientation in optical remote sensing images, we propose a two-stage CNN-based ship-detection method built on ship center and orientation prediction. A center-region prediction network and a ship-orientation classification network are constructed to generate rotated region proposals, from which rotated bounding boxes are predicted to locate arbitrarily oriented ships more accurately. The two networks share the same deconvolutional layers and perform semantic segmentation to predict the center regions and orientations of ships, respectively. They provide the potential center points of the ships, helping to determine more confident locations for the region proposals, as well as the ship orientation information, which supports a more reliable predetermination of rotated region proposals. Classification and regression are then performed for the final ship localization. Compared with typical object detection methods for natural images and other ship-detection methods, our method more accurately detects multiple ships in high-resolution remote sensing images, irrespective of ship orientation, even when the ships are docked very close together. Experiments have demonstrated a promising improvement in ship-detection performance.
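The geometric step from a predicted center, box size, and orientation to a rotated bounding box can be sketched as follows; the helper is an illustration of the idea, not the authors' implementation:

```python
import numpy as np

def rotated_box_corners(cx, cy, length, width, theta):
    """Return the four corners of a rotated bounding box.

    (cx, cy) is the predicted ship center, (length, width) the box size,
    and theta the predicted orientation in radians; the corners are
    obtained by rotating an axis-aligned box about its center.
    """
    half = np.array([[-length, -width], [length, -width],
                     [length,  width], [-length, width]]) / 2.0
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return half @ rot.T + np.array([cx, cy])
```

For example, `rotated_box_corners(10, 5, 8, 2, 0.0)` reduces to the axis-aligned box spanning x in [6, 14] and y in [4, 6]; non-zero `theta` tilts the same rectangle about (10, 5), which is what lets rotated proposals fit closely docked ships without overlapping their neighbors.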


Author(s):  
Y. Zheng ◽  
M. Guo ◽  
Q. Dai ◽  
L. Wang

The GaoFen-2 (GF-2) satellite is a civil optical remote sensing satellite developed independently by China, and the first Chinese satellite with a resolution finer than 1 meter. In this paper, we propose a pan-sharpening method based on guided image filtering, apply it to GF-2 images, and compare its performance with state-of-the-art methods. First, a simulated low-resolution panchromatic (Pan) band is generated; the resampled multispectral (MS) image is then taken as the guidance image to filter the simulated low-resolution Pan image, extracting the spatial information from the original Pan image; finally, the pan-sharpened result is synthesized by injecting the spatial details into each band of the resampled MS image with proper weights. Three groups of GF-2 images acquired over water body, urban, and cropland areas were selected for assessment, and four evaluation metrics were employed for quantitative assessment. The experimental results show that, for GF-2 imagery acquired over different scenes, the proposed method not only achieves high spectral fidelity but also enhances the spatial details.
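The final detail-injection step can be sketched as below. For brevity, the guided filtering stage is replaced here by a plain box-filter low-pass, so this is a simplified stand-in for the authors' method; function names and the default weights are hypothetical:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k mean filter with edge replication: a stand-in here
    for the low-pass / guided filtering step of the paper."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def detail_injection_pansharpen(pan, ms_bands, weights=None):
    """Inject the Pan image's high-frequency detail into each (already
    resampled) MS band: fused_b = ms_b + w_b * (pan - lowpass(pan))."""
    detail = pan.astype(float) - box_blur(pan)
    if weights is None:
        weights = [1.0] * len(ms_bands)
    return [ms.astype(float) + w * detail for ms, w in zip(ms_bands, weights)]

# Sanity check: a featureless Pan band carries no detail, so the fused
# bands equal the input MS bands.
pan = np.full((4, 4), 10.0)
ms = [np.arange(16.0).reshape(4, 4)]
fused = detail_injection_pansharpen(pan, ms)
```

The per-band weights are what let such schemes trade spatial enhancement against spectral fidelity.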


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5284 ◽  
Author(s):  
Heng Zhang ◽  
Jiayu Wu ◽  
Yanli Liu ◽  
Jia Yu

In recent years, research on optical remote sensing images has received increasing attention. Object detection, one of the most challenging tasks in remote sensing, has been remarkably advanced by convolutional neural network (CNN)-based methods such as You Only Look Once (YOLO) and Faster R-CNN. However, due to the complexity of backgrounds and the distinctive object distribution, directly applying these general object detection methods to remote sensing usually yields poor performance. To tackle this problem, a highly efficient and robust framework based on YOLO is proposed. We devise and integrate VaryBlock into the architecture, which effectively offsets some of the information loss caused by downsampling. In addition, several techniques are utilized to improve performance and avoid overfitting. Experimental results show that the proposed method improves the mean average precision by a large margin on the NWPU VHR-10 dataset.
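Benchmarks such as NWPU VHR-10 score detectors by mean average precision, which rests on the intersection-over-union (IoU) overlap criterion between predicted and ground-truth boxes. A minimal sketch of that criterion (illustrative, not code from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

    A detection typically counts as a true positive when its IoU with a
    ground-truth box exceeds a threshold such as 0.5; precision/recall
    curves over all detections then yield the average precision.
    """
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

For example, two unit-offset 2x2 boxes overlap in a single cell, giving an IoU of 1/7.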


2021 ◽  
Vol 13 (6) ◽  
pp. 1132
Author(s):  
Zhibao Wang ◽  
Lu Bai ◽  
Guangfu Song ◽  
Jie Zhang ◽  
Jinhua Tao ◽  
...  

Estimation of the number and geo-location of oil wells is important for policy makers, given their impact on energy resource planning. With recent developments in optical remote sensing, it is possible to identify oil wells from satellite images, and recent advances in deep learning frameworks for object detection in remote sensing make it possible to detect them automatically. In this paper, we collected a dataset named Northeast Petroleum University–Oil Well Object Detection Version 1.0 (NEPU–OWOD V1.0) based on high-resolution remote sensing images from Google Earth Imagery. Our database includes 1192 oil wells in 432 images from Daqing City, which hosts the largest oilfield in China. In this study, we compared nine state-of-the-art deep learning models for object detection from optical remote sensing images. Experimental results show that these models achieve high precision on our collected dataset, which demonstrates their great potential for oil well detection in remote sensing.


2021 ◽  
Vol 13 (5) ◽  
pp. 847
Author(s):  
Wei Huang ◽  
Guanyi Li ◽  
Qiqiang Chen ◽  
Ming Ju ◽  
Jiantao Qu

In the wake of developments in remote sensing, target detection in remote sensing images is of increasing interest. Unfortunately, unlike natural image processing, remote sensing image processing involves large variations in object size, which poses a great challenge to researchers. Although traditional multi-scale detection networks have been successful in handling such variations, they still have certain limitations: (1) Traditional multi-scale detection methods attend to the scale of features but ignore the correlation between feature levels. Each feature map is represented by a single layer of the backbone network, so the extracted features are not comprehensive enough; for example, the SSD network uses features extracted from the backbone network at different scales directly for detection, losing a large amount of contextual information. (2) These methods combine inherent backbone classification networks to perform detection tasks; RetinaNet, for instance, is a combination of the ResNet-101 classification network and an FPN, yet object classification and detection tasks differ. To address these issues, a cross-scale feature fusion pyramid network (CF2PN) is proposed. First, a cross-scale fusion module (CSFM) is introduced to extract sufficiently comprehensive semantic information from features for multi-scale fusion. Moreover, a feature pyramid built with thinning U-shaped modules (TUMs) performs multi-level fusion of the features. Finally, a focal loss in the prediction stage controls the large number of negative samples generated during the feature fusion process. The proposed network architecture is verified on the DIOR and RSOD datasets. The experimental results show that the performance of this method improves by 2–12% on the DIOR and RSOD datasets compared with current SOTA target detection methods.
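The focal loss used in the prediction stage follows Lin et al.'s formulation, which down-weights easy negatives so the many background anchors do not dominate training. A minimal NumPy sketch of the binary case (the function name and default parameters are illustrative):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p is the predicted foreground probability and y the 0/1 label.
    The (1 - p_t)^gamma factor shrinks the loss of well-classified
    (easy) examples, so abundant easy negatives contribute little.
    With gamma = 0 and alpha = 0.5 it reduces to half the standard
    cross-entropy.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)          # numerical stability
    p_t = np.where(y == 1, p, 1 - p)        # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))
```

A confident correct prediction thus incurs far less loss than an uncertain one, which is exactly the re-weighting that keeps the negative samples generated during feature fusion under control.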

