High spatial resolution and long-distance BOTDA using differential Brillouin gain in a dispersion shifted fiber

Author(s):  
Yongkang Dong ◽  
Xiaoyi Bao

Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7241 ◽  
Author(s):  
Dengji Zhou ◽  
Guizhou Wang ◽  
Guojin He ◽  
Tengfei Long ◽  
Ranyu Yin ◽  
...  

Building extraction from high spatial resolution remote sensing images is a hot topic in the fields of remote sensing applications and computer vision. This paper presents a supervised semantic segmentation model named Pyramid Self-Attention Network (PISANet). Its structure is simple, containing only two parts: one is the backbone of the network, which learns the local features of buildings from the image (short-distance context information around each pixel); the other is the pyramid self-attention module, which captures the global features (long-distance context information with other pixels in the image) and the comprehensive features (color, texture, geometric, and high-level semantic features) of the buildings. The network is an end-to-end approach: in the training stage, the input is the remote sensing image and its corresponding label, and the output is a probability map (the probability that each pixel is or is not a building); in the prediction stage, the input is the remote sensing image, and the output is the building extraction result. The complexity of the network structure was kept low so that it is easy to implement. The proposed PISANet was tested on two datasets. The overall accuracy reached 94.50% and 96.15%, the intersection-over-union reached 77.45% and 87.97%, and the F1 score reached 87.27% and 93.55%, respectively. In experiments on the different datasets, PISANet achieved high overall accuracy, a low error rate, and improved integrity of individual buildings.
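The core idea of the pyramid self-attention module described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: learnable query/key/value projections, normalization, and the exact pyramid fusion are omitted, and the function names and pooling scales are hypothetical. It only shows the mechanism the abstract describes: every pixel attends to every other pixel (long-distance context), computed at several pooled scales and fused.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feat):
    """Global self-attention over an H x W x C feature map.

    Each pixel aggregates features from all positions, weighted by
    pairwise similarity, so the output mixes long-distance context
    from the whole image.
    """
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)            # flatten spatial dims: N x C
    attn = softmax(x @ x.T / np.sqrt(c))  # N x N affinity between pixel pairs
    return (attn @ x).reshape(h, w, c)    # context-aggregated features

def pyramid_self_attention(feat, scales=(1, 2, 4)):
    """Hypothetical pyramid variant: average-pool the map to several
    scales, attend at each scale, upsample (nearest) and average."""
    h, w, c = feat.shape
    out = np.zeros_like(feat)
    for s in scales:
        # average-pool by factor s (assumes h and w are divisible by s)
        pooled = feat.reshape(h // s, s, w // s, s, c).mean(axis=(1, 3))
        att = self_attention(pooled)
        out += np.repeat(np.repeat(att, s, axis=0), s, axis=1)
    return out / len(scales)
```

Attending at coarser pooled scales keeps the pairwise affinity matrix small while still relating distant regions, which is the usual motivation for a pyramid over a single full-resolution attention map.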


2010 ◽  
Vol 18 (8) ◽  
pp. 8671 ◽  
Author(s):  
Tom Sperber ◽  
Avishay Eyal ◽  
Moshe Tur ◽  
Luc Thévenaz

2017 ◽  
Vol 25 (6) ◽  
pp. 6997 ◽  
Author(s):  
Xin-Hong Jia ◽  
Han-Qing Chang ◽  
Kai Lin ◽  
Cong Xu ◽  
Jia-Gui Wu

Author(s):  
Dandong Zhao ◽  
Haishi Zhao ◽  
Renchu Guan ◽  
Chen Yang

Building extraction from high spatial resolution images has become an important research topic in computer vision for urban-related applications. Because of the rich detail and complex texture features present in high spatial resolution images, the distribution of buildings is non-uniform and their differences in scale are obvious, so general methods often confuse buildings with other ground objects. In this paper, a building extraction framework based on a deep residual neural network with a self-attention mechanism is proposed. The mechanism contains two parts: one is a spatial attention module, which aggregates and relates the local and global features at each position of a building (short- and long-distance context information); the other is a channel attention module, which improves the representation of comprehensive features (color, texture, geometric, and high-level semantic features). The combination of the dual attention modules enables buildings to be extracted from complex backgrounds. The effectiveness of the method is validated by experiments conducted on a wide-coverage high spatial resolution image, i.e., Jilin-1 Gaofen 02A imagery. Compared with state-of-the-art segmentation methods, i.e., the DeepLab-v3+, PSPNet, and PSANet algorithms, the proposed dual attention network-based method achieved higher accuracy and intersection-over-union in extraction performance and showed the best recognition integrity of buildings.
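The dual attention design described above pairs a position (spatial) attention branch with a channel attention branch and fuses them. A minimal NumPy sketch of that idea follows; it is not the paper's implementation (learnable projections, scaling parameters, and the residual backbone are omitted, and the function names are hypothetical), but it shows how the two branches differ: one computes pixel-to-pixel affinities, the other channel-to-channel affinities.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(feat):
    """Position attention: each pixel aggregates features from all
    positions, weighted by pairwise similarity (long-range context)."""
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)
    attn = softmax(x @ x.T / np.sqrt(c))      # (HW, HW) position affinities
    return (attn @ x).reshape(h, w, c)

def channel_attention(feat):
    """Channel attention: re-weights channels by channel-to-channel
    similarity, sharpening the comprehensive feature representation."""
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)
    attn = softmax(x.T @ x / np.sqrt(h * w))  # (C, C) channel affinities
    return (x @ attn).reshape(h, w, c)

def dual_attention(feat):
    # the two branches run in parallel; their outputs are fused by summation
    return spatial_attention(feat) + channel_attention(feat)
```

Summation is one common fusion choice for parallel attention branches; element-wise product or concatenation followed by a convolution are alternatives, and the abstract does not specify which the authors use.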

