GIS-Based Optimum Geospatial Characterization for Seismic Site Effect Assessment in an Inland Urban Area, South Korea

2020 ◽  
Vol 10 (21) ◽  
pp. 7443
Author(s):  
Han-Saem Kim ◽  
Chang-Guk Sun ◽  
Mingi Kim ◽  
Hyung-Ik Cho ◽  
Moon-Gyo Lee

Soil and rock characteristics are primarily governed by geological, geotechnical, and terrain variation with spatial uncertainty. Earthquake-induced hazards are also strongly influenced by site-specific seismic site effects associated with subsurface strata and soil stiffness. Reliable soil and seismic zonation mapping requires the quantification and normalization of spatial uncertainties, which can be achieved by interactively refining a geospatial database with remote sensing-based and geotechnical information. In this study, geotechnical spatial information and zonation maps were developed through database integrity verification, spatial clustering, optimization of geospatial interpolation, and mapping of site response characteristics. This framework was applied to Daejeon, South Korea, to account for spatially biased terrain, geological, and geotechnical properties in an inland urban area. To develop geometry best matched to high-resolution remote sensing data, a hybrid model blending two outlier detection methods was proposed and applied to the geotechnical datasets. A multiscale grid subdivided by hot spot-based clusters was generated using the optimized geospatial interpolation model. A principal component analysis-based unified zonation map, built on the integrated and optimized geoprocessing framework, identified vulnerable districts in the central old downtown area. The performance of the geospatial mapping and seismic zonation was discussed with reference to the digital elevation model and geological map.
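The hybrid outlier-detection idea can be sketched as follows: combine two simple detectors (an IQR fence and a z-score test) over borehole-derived values, flagging a sample when either detector fires. This is a minimal illustrative sketch, not the study's calibrated model; the thresholds and the depth values are assumptions.

```python
# Hypothetical sketch of a hybrid outlier filter for geotechnical borehole
# values, blending two detectors (IQR fence and z-score) in the spirit of
# the abstract's hybrid model; thresholds and data are illustrative.
from statistics import mean, stdev, quantiles

def hybrid_outliers(values, z_max=2.5, iqr_k=1.5):
    """Flag a value as an outlier when either detector fires (a union)."""
    mu, sigma = mean(values), stdev(values)
    q1, _, q3 = quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - iqr_k * iqr, q3 + iqr_k * iqr
    flags = []
    for v in values:
        z_flag = sigma > 0 and abs(v - mu) / sigma > z_max
        iqr_flag = v < lo or v > hi
        flags.append(z_flag or iqr_flag)
    return flags

# Depth-to-bedrock samples (m); 60.0 is an obvious spike.
depths = [8.2, 7.9, 8.5, 9.1, 8.8, 7.6, 60.0, 8.0]
print(hybrid_outliers(depths))
```

Surviving samples would then feed the geospatial interpolation step; the union of detectors is a deliberately conservative choice for a sketch.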

2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Sunmin Lee ◽  
Sung-Hwan Park ◽  
Moung-Jin Lee ◽  
Taejung Song

The social and economic harm caused to North Korea by water-related disasters is increasing as such disasters increase worldwide. Despite the improvement of inter-Korean relations in recent years, the issue of water-related disasters, which can directly affect people's lives, has not been discussed. With consideration of inter-Korean relations, a government-wide technical plan should be established to reduce the damage caused by water-related disasters. The purpose of this study was therefore to identify remote sensing and GIS techniques that could be useful in reducing the damage caused by water-related disasters, while considering inter-Korean relations and the disasters that occur in North Korea. To this end, based on the definitions of disasters in South and North Korea, water-related disasters that occurred in North Korea over the 17-year period from 2001 to 2017 were first summarized and reclassified into six types: typhoons, downpours, floods, landslides, heavy snowfalls, and droughts. In addition, remote sensing- and GIS-based techniques in South Korea that could be applied to water-related disasters in North Korea were investigated and reclassified according to their applicability to the six disaster types. The results showed that remote sensing and other monitoring techniques using spatial information, GIS-based database construction, and integrated water-related disaster management have high priorities. In particular, the use of radar images, such as C-band images, has proven essential. Moreover, case studies of remote sensing- and GIS-based techniques applicable to the water-related disasters that occur frequently in North Korea were analyzed. South Korea is scheduled to launch water-disaster satellites carrying high-resolution C-band synthetic aperture radar.
These results provide basic data to support techniques and establish countermeasures to reduce the damage from water-related disasters in North Korea in the medium to long term.


2021 ◽  
Vol 4 (17) ◽  
pp. 83-94
Author(s):  
Ricky Anak Kemarau ◽  
Oliver Valentine Eboy

The 1997/1998 and 2015/2016 El Niño events were among the worst in human history. There is still a lack of research on the degree to which El Niño's strength affects climate and weather, especially in the tropics. The objective of this study is to assess the effectiveness of remote sensing technology in identifying the differences between the 1997/1998 and 2015/2016 El Niño events. The study uses six satellite datasets and temperature data from the Malaysia Meteorology Department (MMD). The remote sensing data first undergo pre-processing, converting Digital Numbers (DN) to Land Surface Temperature (LST). The results show that LST patterns changed during the 1997/1998 and 2015/2016 El Niño events, with spatial patterns varying according to Oceanic Niño Index (ONI) values. These results matter because spatial information is essential to those responsible for preparing measures to overcome and reduce the impact of El Niño on the population. In developing countries, including Malaysia, information technology infrastructure for channeling useful information to the community is still lacking. This spatial information provides critical hot-spot information that warrants further attention.
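The DN-to-LST pre-processing step can be sketched as a two-stage conversion: thermal-band DN to at-sensor radiance, then radiance to brightness temperature. The K1/K2 constants below are the published Landsat 5 TM band-6 values, but the gain/bias radiance range is an assumption for the sketch, and a full LST retrieval would additionally correct for emissivity.

```python
import math

# Illustrative sketch of the DN -> LST step: convert a thermal-band Digital
# Number to at-sensor radiance, then to brightness temperature.
K1, K2 = 607.76, 1260.56          # Landsat 5 TM band 6: W/(m^2 sr um), Kelvin
LMIN, LMAX = 1.238, 15.303        # radiance calibration range (assumed)
QCALMIN, QCALMAX = 1, 255

def dn_to_lst_celsius(dn):
    # Linear DN -> radiance rescaling, then Planck-based inversion.
    radiance = LMIN + (LMAX - LMIN) / (QCALMAX - QCALMIN) * (dn - QCALMIN)
    kelvin = K2 / math.log(K1 / radiance + 1.0)  # brightness temperature
    return kelvin - 273.15

print(round(dn_to_lst_celsius(140), 2))
```

Applying this per pixel across the six scenes would yield the LST maps whose spatial patterns are compared against ONI values.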


2004 ◽  
Vol 12 (2) ◽  
Author(s):  
Sugiharto Budi Santoso

Land-use arrangement in an urban area that is not based on complete and reliable spatial information can produce an unintegrated development program. Spatial information systems that can analyze such information to support land-use decisions are therefore greatly needed. Remote sensing data can serve as the main source for presenting the most reliable physical data of an urban area, because they offer not only high temporal resolution but also complete coverage of objects. With the advance of computer-based GIS, remote sensing data can be integrated with GIS, and the data can be shared across various sectors. Thus, both updating and mutual exchange of data can be done easily.


2021 ◽  
Vol 13 (22) ◽  
pp. 4533
Author(s):  
Kai Hu ◽  
Dongsheng Zhang ◽  
Min Xia

Cloud detection is a key step in the preprocessing of optical satellite remote sensing images. In the existing literature, cloud detection methods are roughly divided into threshold methods and deep-learning methods. Most traditional threshold methods are based on the spectral characteristics of clouds, so they easily lose spatial location information in high-reflection areas, resulting in misclassification. In addition, owing to limited generalization, traditional deep-learning networks also easily lose detail and spatial information when applied directly to cloud detection. To solve these problems, we propose a deep-learning model, Cloud Detection UNet (CDUNet), for cloud detection. The network can refine the division boundary of the cloud layer and capture its spatial position information. In the proposed model, we introduced a High-frequency Feature Extractor (HFE) and a Multiscale Convolution (MSC) to refine the cloud boundary and predict fragmented clouds. Moreover, to improve the accuracy of thin cloud detection, a Spatial Prior Self-Attention (SPSA) mechanism was introduced to establish the cloud spatial position information. Additionally, a dual-attention mechanism is proposed to reduce the proportion of redundant information in the model and improve its overall performance. The experimental results showed that our model can cope with complex cloud cover scenes and performs excellently on cloud datasets and the SPARCS dataset. Its segmentation accuracy is better than that of existing methods, which is of great significance for cloud-detection-related work.
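The threshold baseline that CDUNet improves on can be sketched in a few lines: label a pixel as cloud when its visible reflectance is high and its brightness temperature is low. Band values and thresholds below are assumptions; the sketch also shows why such rules misfire on bright non-cloud surfaces, since they use no spatial context.

```python
# Minimal sketch of a classical spectral-threshold cloud mask: per-pixel
# test on (reflectance, brightness temperature), no spatial information.
# Thresholds and sample values are illustrative assumptions.
def threshold_cloud_mask(reflectance, temperature_k,
                         refl_min=0.4, temp_max=285.0):
    """True = cloud. Bright AND cold pixels pass the test."""
    return [
        [r > refl_min and t < temp_max for r, t in zip(row_r, row_t)]
        for row_r, row_t in zip(reflectance, temperature_k)
    ]

refl = [[0.55, 0.10], [0.62, 0.48]]      # 2x2 toy scene
temp = [[278.0, 295.0], [281.0, 296.0]]  # Kelvin
print(threshold_cloud_mask(refl, temp))
```

A bright, warm rooftop (like the 0.48/296 K pixel) is correctly rejected here, but a bright, cold snowfield would be misclassified as cloud, which is the failure mode the learned model targets.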


2015 ◽  
Vol 2015 ◽  
pp. 1-11
Author(s):  
Pengwei Li ◽  
Wenying Ge

Shadows limit many remote sensing applications such as classification, target detection, and change detection. Most current shadow detection methods apply a histogram threshold to spectral characteristics to distinguish shadows from non-shadows directly, producing a "hard binary shadow." Obviously, the performance of threshold-based methods relies heavily on the selected threshold, and these methods take no spatial information into account. To overcome these shortcomings, a soft shadow description is developed by introducing the concept of opacity into shadow detection, and an MRF-based shadow detection method is proposed to make use of neighborhood information. Experiments on remote sensing images show that the proposed method obtains more accurate detection results.
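The soft-shadow idea can be illustrated with a minimal opacity mapping: instead of a single hard threshold, pixel intensity is mapped to a shadow opacity in [0, 1] with a linear ramp between two thresholds. This is a sketch of the concept only; the threshold values are assumptions, and the paper's full method additionally regularizes the result with an MRF over neighboring pixels.

```python
# Sketch of "soft shadow" opacity: 1.0 = fully shadow, 0.0 = fully lit,
# linear ramp in between. Threshold values (40, 90) are assumptions.
def shadow_opacity(intensity, low=40, high=90):
    if intensity <= low:
        return 1.0
    if intensity >= high:
        return 0.0
    return (high - intensity) / (high - low)

print([shadow_opacity(v) for v in (30, 65, 120)])
```

Pixels near the ramp midpoint are exactly where a hard threshold is most brittle, and where the MRF's neighborhood information can tip the decision.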


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rajat Garg ◽  
Anil Kumar ◽  
Nikunj Bansal ◽  
Manish Prateek ◽  
Shashi Kumar

Urban area mapping is an important application of remote sensing that aims at estimating urban land cover and detecting changes in it. A major challenge when analyzing Synthetic Aperture Radar (SAR) remote sensing data is the strong similarity of highly vegetated urban areas and oriented urban targets to actual vegetation, which leads to misclassification of urban areas as forest cover. The present work is a precursor study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission and aims at minimizing the misclassification of such highly vegetated and oriented urban targets into the vegetation class with the help of deep learning. In this study, three machine learning algorithms, Random Forest (RF), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), were implemented along with a deep learning model, DeepLabv3+, for semantic segmentation of Polarimetric SAR (PolSAR) data. It is a general perception that a large dataset is required for the successful implementation of any deep learning model, but in SAR-based remote sensing a major issue is the unavailability of a large benchmark labeled dataset for training deep learning algorithms from scratch. The current work shows that a pre-trained DeepLabv3+ model outperforms the machine learning algorithms on the land use and land cover (LULC) classification task even with a small dataset, using transfer learning. DeepLabv3+ achieved the highest pixel accuracy of 87.78% and an overall pixel accuracy of 85.65%; among the machine learning algorithms, Random Forest performed best with an overall pixel accuracy of 77.91%, while SVM and KNN trailed with overall accuracies of 77.01% and 76.47%, respectively.
The highest precision of 0.9228 was recorded for the urban class in the semantic segmentation task with DeepLabv3+, while the machine learning algorithms SVM and RF gave comparable results with precisions of 0.8977 and 0.8958, respectively.
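The metrics quoted above can be sketched from flattened label maps: overall pixel accuracy is the fraction of matching pixels, and per-class precision is the fraction of pixels predicted as a class that truly belong to it. The toy labels below (0 = vegetation, 1 = urban) are illustrative assumptions.

```python
# Sketch of the evaluation metrics: overall pixel accuracy and per-class
# precision over flattened label maps (0 = vegetation, 1 = urban; toy data).
def overall_pixel_accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def class_precision(pred, truth, cls):
    # Ground-truth labels at the positions predicted as `cls`.
    hits = [t for p, t in zip(pred, truth) if p == cls]
    return sum(t == cls for t in hits) / len(hits)

truth = [1, 1, 0, 0, 1, 0, 1, 0]
pred  = [1, 1, 0, 1, 1, 0, 0, 0]
print(overall_pixel_accuracy(pred, truth))  # 6 of 8 pixels match
print(class_precision(pred, truth, 1))      # 3 of 4 urban predictions correct
```

The urban-class precision of 0.9228 reported for DeepLabv3+ is exactly this second quantity computed over the full test maps.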


2021 ◽  
Vol 13 (10) ◽  
pp. 1909
Author(s):  
Jiahuan Jiang ◽  
Xiongjun Fu ◽  
Rui Qin ◽  
Xiaoyan Wang ◽  
Zhifeng Ma

Synthetic Aperture Radar (SAR) has become one of the important technical means of marine monitoring in remote sensing owing to its all-day, all-weather capability. Monitoring ships in national territorial waters supports maritime law enforcement, maritime traffic control, and national maritime security, so ship detection has been a hot spot and focus of research. As detection methods have evolved from traditional approaches to deep learning, most research has relied on ever-growing Graphics Processing Unit (GPU) computing power to propose increasingly complex and computationally intensive strategies, while transplanted optical image detectors ignore the low signal-to-noise ratio, low resolution, single-channel nature, and other characteristics arising from the SAR imaging principle. Detection accuracy has been pursued at the expense of detection speed and practical deployment: almost all algorithms rely on powerful clustered desktop GPUs, which cannot be deployed on the front line of marine monitoring to cope with changing realities. To address these issues, this paper proposes a multi-channel fusion SAR image processing method that makes full use of image information and the network's feature extraction ability; the architecture and training are based on the latest You Only Look Once version 4 (YOLO-V4) deep learning framework. The YOLO-V4-light network was tailored for real-time deployment, significantly reducing model size, detection time, computational parameter count, and memory consumption, and the network was refined for three-channel images to compensate for the accuracy loss due to light-weighting.
The test experiments were run entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), simplifying the model while maintaining a lead over most existing methods. The YOLO-V4-light ship detection algorithm proposed in this paper has great practical value for maritime safety monitoring and emergency rescue.
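The AP figure above rests on a per-box matching test that can be sketched directly: a predicted box counts as a true positive when its Intersection-over-Union with a ground-truth ship box exceeds a cutoff (0.5 is the conventional choice). Boxes are (x1, y1, x2, y2); the coordinates below are illustrative.

```python
# Sketch of the IoU test used when scoring detections against ground-truth
# ship boxes; the 0.5 threshold is the conventional AP cutoff.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union

gt   = (10, 10, 50, 30)   # ground-truth ship box (toy coordinates)
pred = (12, 12, 52, 32)   # slightly offset prediction
print(iou(pred, gt) > 0.5)  # counts as a true-positive detection
```

AP then summarizes precision over recall as the detector's confidence threshold is swept, using this matching rule throughout.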


2020 ◽  
Vol 13 (1) ◽  
pp. 71
Author(s):  
Zhiyong Xu ◽  
Weicun Zhang ◽  
Tianxiang Zhang ◽  
Jiangyun Li

Semantic segmentation is a significant method in remote sensing image (RSI) processing and has been widely used in various applications. Conventional convolutional neural network (CNN)-based semantic segmentation methods tend to lose spatial information in the feature extraction stage and usually pay little attention to global context information. Moreover, imbalanced category scales and uncertain boundary information coexist in RSIs, posing a further challenge for the semantic segmentation task. To overcome these problems, a high-resolution context extraction network (HRCNet) based on a high-resolution network (HRNet) is proposed in this paper. In this approach, the HRNet structure is adopted to preserve spatial information. Moreover, a light-weight dual attention (LDA) module is designed to obtain global context information in the feature extraction stage, and a feature enhancement feature pyramid (FEFP) structure is proposed and employed to fuse contextual information across scales. In addition, to capture boundary information, we design a boundary aware (BA) module combined with a boundary aware loss (BAloss) function. Experimental results on the Potsdam and Vaihingen datasets show that the proposed approach significantly improves boundary and segmentation performance, reaching overall accuracy scores of 92.0% and 92.3%, respectively. It is therefore envisaged that the proposed HRCNet model will be advantageous for remote sensing image segmentation.
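One common ingredient of a boundary-aware loss is a boundary map extracted from the label image, so boundary pixels can be weighted more heavily during training. A minimal sketch, under the assumption of a simple 4-neighbour rule (the paper's BA module and BAloss are learned components, not this rule):

```python
# Sketch of boundary extraction from a label map: a pixel is a boundary
# pixel when any 4-neighbour carries a different class label. The label
# map is an illustrative assumption.
def boundary_mask(labels):
    h, w = len(labels), len(labels[0])
    mask = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and labels[ni][nj] != labels[i][j]:
                    mask[i][j] = True
    return mask

labels = [[0, 0, 1],
          [0, 0, 1],
          [0, 0, 1]]
print(boundary_mask(labels))
```

The resulting mask can up-weight the per-pixel loss at class transitions, which is where segmentation networks most often blur.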


Forests ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 692
Author(s):  
MD Abdul Mueed Choudhury ◽  
Ernesto Marcheggiani ◽  
Andrea Galli ◽  
Giuseppe Modica ◽  
Ben Somers

Currently, the worsening impacts of urbanization have made it important to monitor and manage existing urban trees and to secure sustainable use of the available green spaces. Urban tree species identification and evaluation of their role in atmospheric Carbon Stock (CS) remain prime concerns for city planners seeking a convenient and easily adaptable urban green planning and management system. A detailed methodology for urban tree carbon stock calibration and mapping was applied in the urban area of Brussels, Belgium. A comparative analysis of the mapping outcomes assessed the convenience and efficiency of two different remote sensing data sources, Light Detection and Ranging (LiDAR) and WorldView-3 (WV-3), in a single urban area, and the mapping results were validated against field-estimated carbon stocks. In the initial stage, dominant tree species were identified and classified using the high-resolution WV-3 image, leading to the final carbon stock mapping based on the dominant species. An object-based image analysis approach attained an overall accuracy (OA) of 71% in classifying the dominant species. Field estimates of carbon stock for each plot were computed using an allometric model based on field tree dendrometric data. Then, based on the correlation between the field data and the variables extracted from the remote sensing data (the Normalized Difference Vegetation Index, NDVI, and the Crown Height Model, CHM), carbon stock mapping and validation were done in a GIS environment. The calibrated NDVI and CHM were used to compute possible carbon stock from the WV-3 image and the LiDAR data, respectively. A comparative discussion highlights the issues, especially for developing countries, where WV-3 data could be a better option than the rarely available LiDAR data.
This study could assist city planners in understanding and deciding on the applicability of remote sensing data sources based on their availability and level of expediency, ensuring a sustainable urban green management system.
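The field-plot estimate described above typically takes the form of a power-law allometric model, AGB = a * DBH^b, with carbon taken as roughly half of dry biomass. A minimal sketch, where the coefficients a and b are illustrative assumptions and not the study's calibrated values:

```python
# Sketch of a per-plot carbon estimate from an allometric model.
# Coefficients a, b are illustrative, not the study's calibrated values.
CARBON_FRACTION = 0.5   # carbon as ~50% of dry biomass (common convention)

def tree_carbon_kg(dbh_cm, a=0.11, b=2.62):
    biomass_kg = a * dbh_cm ** b   # above-ground dry biomass (power law)
    return CARBON_FRACTION * biomass_kg

plot_dbh = [22.0, 35.5, 18.2]      # diameters at breast height (cm)
plot_carbon = sum(tree_carbon_kg(d) for d in plot_dbh)
print(round(plot_carbon, 1))       # plot-level carbon stock in kg
```

Summing such per-tree estimates over each plot yields the field carbon stocks that the NDVI- and CHM-based maps were regressed against and validated with.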

