Metallic debossed characters industrial online non‐segmentation identification based on improved multi‐scale image fusion enhancement and deep neural network

2021 ◽  
Author(s):  
Zhong Xiang ◽  
Huaxiong Wu ◽  
Ding Zhou
2021 ◽  
pp. 1-15
Author(s):  
Wenjun Tan ◽  
Luyu Zhou ◽  
Xiaoshuo Li ◽  
Xiaoyu Yang ◽  
Yufei Chen ◽  
...  

BACKGROUND: The distribution of pulmonary vessels in computed tomography (CT) and computed tomography angiography (CTA) images of the lung is important for diagnosing disease, formulating surgical plans, and pulmonary research.
PURPOSE: Based on the pulmonary vascular segmentation task of the International Symposium on Image Computing and Digital Medicine 2020 challenge, this paper reviews 12 different pulmonary vascular segmentation algorithms for lung CT and CTA images and objectively evaluates and compares their performance.
METHODS: First, we present the annotated reference dataset of lung CT and CTA images. A subset of the dataset consisting of 7,307 slices for training and 3,888 slices for testing was made available to participants. Second, by comparing the performance of the convolutional neural networks submitted by 12 different institutions for pulmonary vascular segmentation, we summarize the causes of common defects and possible improvements. The models are mainly based on U-Net, attention mechanisms, GANs, and multi-scale fusion networks. Performance is measured in terms of the Dice coefficient, over-segmentation ratio, and under-segmentation rate. Finally, we discuss several proposed methods for improving pulmonary vessel segmentation results with deep neural networks.
RESULTS: Compared against the annotated ground truth from both lung CT and CTA images, most of the 12 deep neural network algorithms do an admirable job of pulmonary vascular extraction and segmentation, with Dice coefficients ranging from 0.70 to 0.85. The Dice coefficients of the top three algorithms are about 0.80.
CONCLUSIONS: The results show that integrating methods that consider spatial information, fuse multi-scale feature maps, or apply effective post-processing into the deep neural network training and optimization process is significant for further improving the accuracy of pulmonary vascular segmentation.
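The evaluation metrics above can be sketched in NumPy for binary masks. This is a minimal illustration, not the challenge's official scoring code; the over/under-segmentation formulas follow one common convention (false positives and false negatives normalized by the number of ground-truth vessel pixels), which may differ from the challenge's exact definitions.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def over_under_segmentation(pred, gt):
    """One common convention: OR = FP / |GT|, UR = FN / |GT|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # correctly segmented vessel pixels
    fp = np.logical_and(pred, ~gt).sum()  # over-segmented pixels
    fn = np.logical_and(~pred, gt).sum()  # under-segmented pixels
    denom = tp + fn                        # ground-truth vessel pixels
    return fp / denom, fn / denom
```

A Dice coefficient of 0.80, as reported for the top three algorithms, means the overlap between prediction and ground truth covers 80% of their combined (averaged) sizes.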


2015 ◽  
Vol 108 (2) ◽  
pp. 473a ◽  
Author(s):  
Xundong Wu ◽  
Yong Wu ◽  
Enrico Stefani

2021 ◽  
Vol 2021 (1) ◽  
pp. 5-10
Author(s):  
Chahine Nicolas ◽  
Belkarfa Salim

In this paper, we propose a novel, standardized approach to the problem of camera-quality assessment on portrait scenes. Our goal is to evaluate the capacity of smartphone front cameras to preserve texture details on faces. We introduce a new portrait setup and an automated texture measurement. The setup includes two custom-built lifelike mannequin heads, shot in a controlled lab environment. The automated texture measurement includes region-of-interest (ROI) detection and a deep neural network. To this end, we create a realistic mannequin database containing images from different cameras, shot under several lighting conditions. The ground truth is based on a novel pairwise comparison technology in which scores are expressed in terms of just-noticeable differences (JND). In terms of methodology, we propose a multi-scale CNN architecture with random-crop augmentation to overcome overfitting and to obtain low-level feature extraction. We validate our approach by comparing its performance with several baselines inspired by the image quality assessment (IQA) literature.
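The random-crop augmentation mentioned above can be sketched as follows. This is a generic NumPy version under stated assumptions: the paper does not specify crop sizes or sampling policy here, so `crop_h` and `crop_w` are illustrative parameters, and crops are sampled uniformly.

```python
import numpy as np

def random_crop(image, crop_h, crop_w, rng=None):
    """Return a random (crop_h, crop_w) crop of an H x W (x C) image,
    sampled uniformly over all valid positions."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    if crop_h > h or crop_w > w:
        raise ValueError("crop size exceeds image size")
    top = rng.integers(0, h - crop_h + 1)
    left = rng.integers(0, w - crop_w + 1)
    return image[top:top + crop_h, left:left + crop_w]
```

Because each training epoch sees a different sub-window of every face image, the network cannot memorize fixed pixel positions, which is the mechanism behind the overfitting reduction the abstract describes.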


AIP Advances ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. 125025
Author(s):  
Haitao He ◽  
Shuanfeng Zhao ◽  
Wei Guo ◽  
Yuan Wang ◽  
Zhizhong Xing ◽  
...  

Entropy ◽  
2019 ◽  
Vol 21 (6) ◽  
pp. 570 ◽  
Author(s):  
Jingchun Piao ◽  
Yunfan Chen ◽  
Hyunchul Shin

In this paper, we present a new, effective infrared (IR) and visible (VIS) image fusion method based on a deep neural network. In our method, a Siamese convolutional neural network (CNN) automatically generates a weight map that represents the saliency of each pixel of a pair of source images. The CNN automatically encodes an image into a feature domain for classification. With the proposed method, the two key problems in image fusion, activity-level measurement and fusion-rule design, can be solved in one shot. The fusion is carried out through multi-scale image decomposition based on the wavelet transform, and the reconstructed result is more perceptually consistent with the human visual system. In addition, the visual effectiveness of the proposed fusion method is evaluated by comparing pedestrian detection results with those of other methods, using the YOLOv3 object detector on a public benchmark dataset. The experimental results show that the proposed method achieves competitive results in terms of both quantitative assessment and visual quality.
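The core idea, fusing two source images through a per-pixel saliency weight map, can be sketched as follows. This is a simplified stand-in: the paper derives the weight map from a Siamese CNN and fuses inside a wavelet decomposition, whereas the sketch below uses local contrast as a crude saliency proxy and a single-scale weighted average.

```python
import numpy as np

def local_contrast(img, k=3):
    """Crude saliency proxy: absolute deviation from a k x k box-filtered mean."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    mean = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            mean += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    mean /= k * k
    return np.abs(img - mean)

def fuse(ir, vis, eps=1e-8):
    """Pixel-wise weighted average; weights favor the more salient source."""
    s_ir, s_vis = local_contrast(ir), local_contrast(vis)
    w = s_ir / (s_ir + s_vis + eps)  # weight map in [0, 1]
    return w * ir + (1.0 - w) * vis
```

Replacing the hand-crafted `local_contrast` with a learned network is exactly what lets the paper's method solve activity-level measurement and fusion-rule design jointly.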


Author(s):  
Weijie Yang ◽  
Yueting Hui

Image scene analysis analyzes image scene content through semantic segmentation, which identifies the categories and positions of the different objects in an image. However, the loss of spatial detail often reduces the accuracy of scene analysis, resulting in rough edges in fully convolutional network (FCN) outputs, inconsistent class labels within target regions, and missed small targets. To address these problems, this paper enlarges the receptive field, performs multi-scale fusion, and reweights sensitive channels, so as to improve feature discrimination while maintaining or restoring spatial detail. The FCN deep neural network is used to build the base semantic segmentation model, and ASPP, data augmentation, SENet, a decoder, and global pooling are added to the baseline to optimize the model structure and improve segmentation quality. Finally, more accurate scene analysis results are obtained.
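The channel reweighting step (the SENet component above) can be sketched in NumPy. This is a minimal illustration of a squeeze-and-excitation block, not the paper's trained model: the weight matrices `w1`/`w2` and the reduction ratio are illustrative stand-ins for learned parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation: global-average-pool each channel (squeeze),
    pass through a two-layer bottleneck (excitation), then rescale channels.
    x: (C, H, W); w1: (C//r, C); w2: (C, C//r) for reduction ratio r."""
    z = x.mean(axis=(1, 2))                     # squeeze: (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))   # excitation: (C,) in (0, 1)
    return x * s[:, None, None]                 # per-channel rescale
```

The learned scale `s` suppresses channels that carry little class-discriminative signal and boosts sensitive ones, which is how the block sharpens feature discrimination without altering spatial resolution.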

