PFCN: a fully convolutional network for point cloud semantic segmentation

2019 ◽  
Vol 55 (20) ◽  
pp. 1088-1090
Author(s):  
Jian Lu ◽  
Tong Liu ◽  
Maoxin Luo ◽  
Haozhe Cheng ◽  
Kaibing Zhang
IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 673-682
Author(s):  
Jian Ji ◽  
Xiaocong Lu ◽  
Mai Luo ◽  
Minghui Yin ◽  
Qiguang Miao ◽  
...  

2021 ◽  
Vol 13 (5) ◽  
pp. 1003
Author(s):  
Nan Luo ◽  
Hongquan Yu ◽  
Zhenfeng Huo ◽  
Jinhui Liu ◽  
Quan Wang ◽  
...  

Semantic segmentation of sensed point cloud data plays a significant role in scene understanding and reconstruction, robot navigation, etc. This work presents a Graph Convolutional Network integrating K-Nearest Neighbor searching (KNN) and the Vector of Locally Aggregated Descriptors (VLAD). KNN searching is used to construct the topological graph of each point and its neighbors. Then, convolution is performed on the edges of the constructed graph to extract representative local features with multiple Multilayer Perceptrons (MLPs). Afterwards, a trainable VLAD layer, NetVLAD, is embedded in the feature encoder to aggregate the local and global contextual features. The designed feature encoder is repeated multiple times, and the extracted features are concatenated in a skip-connection style to strengthen the distinctiveness of features and thereby improve the segmentation. Experimental results on two datasets show that the proposed work addresses the shortcoming of insufficient local feature extraction and improves the accuracy of semantic segmentation (mIoU 60.9% and oAcc 87.4% on S3DIS) compared with existing models.
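The two building blocks this abstract names, edge convolution over a KNN graph and NetVLAD aggregation, can be illustrated compactly. Below is a minimal PyTorch sketch under stated assumptions: the (B, N, C) tensor layout, the neighbour count, the cluster count, and the single-layer MLPs are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn


def knn_graph(x, k):
    """Indices of the k nearest neighbours of every point. x: (B, N, C)."""
    dist = torch.cdist(x, x)                                   # (B, N, N) pairwise distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]    # drop the self-match


class EdgeConv(nn.Module):
    """Shared-MLP convolution over the edges of the KNN graph (local features)."""
    def __init__(self, in_ch, out_ch, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_ch, out_ch), nn.ReLU())

    def forward(self, x):                                      # x: (B, N, C)
        idx = knn_graph(x, self.k)                             # (B, N, k)
        nbrs = torch.gather(
            x.unsqueeze(1).expand(-1, x.size(1), -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1)))  # (B, N, k, C)
        ctr = x.unsqueeze(2).expand_as(nbrs)
        edge = torch.cat([ctr, nbrs - ctr], dim=-1)            # centre + relative feature
        return self.mlp(edge).max(dim=2).values                # aggregate over neighbours


class NetVLAD(nn.Module):
    """Trainable VLAD layer: soft-assign local features to clusters, sum residuals."""
    def __init__(self, dim, num_clusters=32):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(num_clusters, dim))
        self.assign = nn.Linear(dim, num_clusters)

    def forward(self, x):                                      # x: (B, N, C)
        a = torch.softmax(self.assign(x), dim=-1)              # (B, N, K) soft assignment
        resid = x.unsqueeze(2) - self.centroids                # (B, N, K, C) residuals
        vlad = (a.unsqueeze(-1) * resid).sum(dim=1)            # (B, K, C)
        return nn.functional.normalize(vlad.flatten(1), dim=-1)


# Example: xyz coordinates -> per-point local features -> one global descriptor.
pts = torch.rand(2, 1024, 3)
local_feat = EdgeConv(3, 64)(pts)                              # (2, 1024, 64)
global_feat = NetVLAD(64)(local_feat)                          # (2, 64 * 32)
```

In the paper's design such an encoder block is stacked several times and the intermediate features are concatenated before the per-point classifier; the sketch above only shows one pass.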


2021 ◽  
Vol 13 (16) ◽  
pp. 3211
Author(s):  
Tian Tian ◽  
Zhengquan Chu ◽  
Qian Hu ◽  
Li Ma

Semantic segmentation is a fundamental task in remote sensing image interpretation, which aims to assign a semantic label to every pixel in a given image. Accurate semantic segmentation remains challenging due to the complex distributions of various ground objects. With the development of deep learning, a series of segmentation networks represented by the fully convolutional network (FCN) has made remarkable progress on this problem, but segmentation accuracy is still far from expectations. This paper focuses on the importance of class-specific features of different land cover objects, and presents a novel end-to-end class-wise processing framework for segmentation. The proposed class-wise FCN (C-FCN) takes the form of an encoder-decoder structure with skip-connections, in which the encoder is shared to produce general features for all categories and the decoder is class-wise to process class-specific features. Specifically, class-wise transition (CT), class-wise up-sampling (CU), class-wise supervision (CS), and class-wise classification (CC) modules are designed to achieve the class-wise transfer, recover the resolution of class-wise feature maps, bridge the encoder and modified decoder, and implement class-wise classifications, respectively. Class-wise and group convolutions are adopted in the architecture to control the number of parameters. The method is tested on the public ISPRS 2D semantic labeling benchmark datasets. Experimental results show that the proposed C-FCN significantly improves segmentation performance compared with many state-of-the-art FCN-based networks, revealing its potential for accurate segmentation of complex remote sensing images.
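The key mechanism here, keeping per-class feature channels separate in the decoder via grouped convolutions, can be sketched briefly. The following PyTorch snippet is only a rough analogue of the CT, CU, and CC modules named in the abstract; the channel counts, the single up-sampling step, and the omission of the CS supervision branch are simplifying assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn


class ClassWiseDecoder(nn.Module):
    """Per-class decoding via grouped convolutions: each of the num_classes
    categories keeps its own channel group, up-sampling path and score map."""
    def __init__(self, enc_ch=512, per_class_ch=16, num_classes=6):
        super().__init__()
        c = num_classes * per_class_ch
        # class-wise transition: map shared encoder features to per-class channel groups
        self.ct = nn.Conv2d(enc_ch, c, kernel_size=1)
        # class-wise up-sampling: grouped transposed conv keeps the groups separate
        self.cu = nn.ConvTranspose2d(c, c, kernel_size=4, stride=2, padding=1,
                                     groups=num_classes)
        # class-wise classification: one score map per class, still grouped
        self.cc = nn.Conv2d(c, num_classes, kernel_size=1, groups=num_classes)

    def forward(self, feats):              # feats: (B, enc_ch, H, W) from the shared encoder
        x = self.ct(feats)
        x = torch.relu(self.cu(x))
        return self.cc(x)                  # (B, num_classes, 2H, 2W) class scores


# Example: shared encoder features decoded class-wise at double the spatial resolution.
scores = ClassWiseDecoder()(torch.rand(1, 512, 32, 32))   # -> (1, 6, 64, 64)
```

Because the transposed convolution and the classifier use groups equal to the number of classes, each class's channels never mix with another class's in the decoder, which is the point of the class-wise design while keeping the parameter count modest.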


2021 ◽  
Vol 1952 (2) ◽  
pp. 022019
Author(s):  
Gaihua Wang ◽  
Xizhou Wan ◽  
Xu Zheng ◽  
Zhao Guo

2021 ◽  
Vol 233 ◽  
pp. 01032
Author(s):  
Zhang Jun ◽  
Duan Xiaoli ◽  
Xie Yi ◽  
Duan Jianjia ◽  
Huang Fuyong ◽  
...  

A semantic segmentation method based on a fully convolutional network is proposed to automatically detect buffer layer defects in high-voltage cables. One hundred seventy-seven high-resolution X-ray images of cables were collected. An FCN-8s architecture with a VGG16 backbone is adopted. The results indicate that FCN-8s achieves an mIoU of 0.67 on the test set, proving to be an efficient way to detect buffer layer defects.
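FCN-8s on a VGG16 backbone fuses predictions from the pool3, pool4 and pool5 feature maps and up-samples the result 8x back to the input resolution. Below is a minimal PyTorch sketch of that idea; the feature-slice indices, the two-class output (defect vs. background), and bilinear rather than learned transposed-convolution up-sampling are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class FCN8s(nn.Module):
    """FCN-8s skip architecture on a VGG16 feature extractor."""
    def __init__(self, num_classes=2):
        super().__init__()
        feats = vgg16(weights=None).features     # pass ImageNet weights here to use pretraining
        self.to_pool3 = feats[:17]                # conv1_1 ... pool3 (1/8 resolution, 256 ch)
        self.to_pool4 = feats[17:24]              # conv4_1 ... pool4 (1/16, 512 ch)
        self.to_pool5 = feats[24:]                # conv5_1 ... pool5 (1/32, 512 ch)
        self.score3 = nn.Conv2d(256, num_classes, 1)
        self.score4 = nn.Conv2d(512, num_classes, 1)
        self.score5 = nn.Conv2d(512, num_classes, 1)

    def forward(self, x):
        p3 = self.to_pool3(x)
        p4 = self.to_pool4(p3)
        p5 = self.to_pool5(p4)
        # fuse coarse pool5 scores with pool4, then with pool3, then restore input size
        s = F.interpolate(self.score5(p5), size=p4.shape[2:], mode="bilinear",
                          align_corners=False) + self.score4(p4)
        s = F.interpolate(s, size=p3.shape[2:], mode="bilinear",
                          align_corners=False) + self.score3(p3)
        return F.interpolate(s, size=x.shape[2:], mode="bilinear", align_corners=False)


# Example: one X-ray image tensor -> per-pixel defect/background scores.
logits = FCN8s(num_classes=2)(torch.rand(1, 3, 256, 256))   # -> (1, 2, 256, 256)
```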

