Three-Dimensional Vessel Segmentation in Whole-Tissue and Whole-Block Imaging Using a Deep Neural Network

Author(s):  
Takashi Ohnishi ◽  
Alexei Teplov ◽  
Noboru Kawata ◽  
Kareem Ibrahim ◽  
Peter Ntiamoah ◽  
...  
2021 ◽  
pp. 1-15
Author(s):  
Wenjun Tan ◽  
Luyu Zhou ◽  
Xiaoshuo Li ◽  
Xiaoyu Yang ◽  
Yufei Chen ◽  
...  

BACKGROUND: The distribution of pulmonary vessels in computed tomography (CT) and computed tomography angiography (CTA) images of the lung is important for diagnosing disease, formulating surgical plans, and pulmonary research. PURPOSE: Based on the pulmonary vascular segmentation task of the International Symposium on Image Computing and Digital Medicine 2020 challenge, this paper reviews 12 different pulmonary vascular segmentation algorithms for lung CT and CTA images and then objectively evaluates and compares their performance. METHODS: First, we present the annotated reference dataset of lung CT and CTA images. A subset of the dataset, consisting of 7,307 slices for training and 3,888 slices for testing, was made available to participants. Second, by analyzing the performance of the convolutional neural networks submitted by 12 different institutions for pulmonary vascular segmentation, we summarize the causes of common defects and possible improvements. The models are mainly based on U-Net, attention mechanisms, GANs, and multi-scale fusion networks. Performance is measured in terms of the Dice coefficient, over-segmentation ratio, and under-segmentation rate. Finally, we discuss several proposed methods for improving pulmonary vessel segmentation results with deep neural networks. RESULTS: Compared against the annotated ground truth from both lung CT and CTA images, most of the 12 deep neural network algorithms perform admirably in pulmonary vascular extraction and segmentation, with Dice coefficients ranging from 0.70 to 0.85. The Dice coefficients of the top three algorithms are about 0.80. CONCLUSIONS: The results show that integrating methods that consider spatial information, fuse multi-scale feature maps, or apply effective post-processing into the deep neural network training and optimization process is significant for further improving the accuracy of pulmonary vascular segmentation.
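The three evaluation metrics named in the abstract can be computed directly from binary masks. A minimal NumPy sketch follows, using one common definition of the over- and under-segmentation measures; the challenge's exact definitions may differ slightly:

```python
import numpy as np

def dice_coefficient(pred, truth):
    # Dice = 2|P ∩ T| / (|P| + |T|) for binary masks
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def over_segmentation_ratio(pred, truth):
    # falsely predicted voxels, relative to the ground-truth size
    pred, truth = pred.astype(bool), truth.astype(bool)
    return np.logical_and(pred, ~truth).sum() / truth.sum()

def under_segmentation_rate(pred, truth):
    # ground-truth voxels missed by the prediction
    pred, truth = pred.astype(bool), truth.astype(bool)
    return np.logical_and(~pred, truth).sum() / truth.sum()
```

A Dice coefficient of 1.0 indicates a perfect overlap; the reported range of 0.70 to 0.85 therefore reflects substantial but imperfect agreement with the annotated vessels.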


2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
M S Huang ◽  
M R Tsai

Abstract Background Deep neural network-assisted automated echocardiography interpretation, followed by final cardiologist confirmation, is now gradually emerging. Applications in echocardiographic view classification, chamber size and myocardial mass evaluation, and the detection of certain diseases have already been published. Our aim, instead of the frame-by-frame "image-level" interpretation of previous studies, is to apply deep neural networks to echocardiographic temporal relationship analysis ("video-level") for automated recognition of left ventricular regional wall motion abnormalities. Methods We collected all echocardiography exams performed in 2017 and preprocessed them into numeric arrays for matrix computation. Regional wall motion abnormalities were confirmed by authorized cardiologists and processed into ground-truth labels indicating whether abnormalities were present in the anterior, inferior, septal, or lateral walls of the left ventricle. We first developed a convolutional neural network (CNN) model for view selection, gathering parasternal long/short-axis views and apical four/two-chamber views from each exam, together with a view prediction confidence used for strict image quality control. Within these images, we annotated a subset to develop a second CNN model, a U-net, for image segmentation to mark each regional wall. Finally, we developed the main three-dimensional CNN model, whose inputs are the four echocardiographic video views and whose output is the final label for motion abnormalities in each wall. Results In total, we collected 13,984 echocardiography series and gathered the four main views with a quality confidence level above 90%, yielding 9,323 series for training. Within these images, we annotated 2,736 frames for the U-net model, achieving a segmentation Dice score of 73%. With the segmentation model incorporated, the final three-dimensional CNN model predicted regional wall motion with an accuracy of 83%. Conclusions Deep neural network application to regional wall motion recognition is feasible and warrants further investigation to improve performance. Acknowledgement/Funding None
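The strict quality-control step described above (an exam enters training only if its four required views are classified with confidence above 90%) can be sketched as follows; the view abbreviations and dictionary interface are illustrative assumptions, not the authors' implementation:

```python
# parasternal long/short axis, apical four/two chamber (hypothetical labels)
REQUIRED_VIEWS = ("PLAX", "PSAX", "A4C", "A2C")

def passes_quality_control(view_confidences, threshold=0.90):
    # keep an exam only if every required view was found
    # and classified with confidence above the threshold
    return all(view_confidences.get(v, 0.0) > threshold
               for v in REQUIRED_VIEWS)
```

Filtering 13,984 collected series down to 9,323 training series is consistent with this kind of all-views gate discarding roughly a third of exams.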


2019 ◽  
Vol 357 ◽  
pp. 151-162 ◽  
Author(s):  
Keyu Wu ◽  
Mahdi Abolfazli Esfahani ◽  
Shenghai Yuan ◽  
Han Wang

Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3226
Author(s):  
Radu Mirsu ◽  
Georgiana Simion ◽  
Catalin Daniel Caleanu ◽  
Ioana Monica Pop-Calimanu

Gesture recognition is an intensively researched area for several reasons. One of the most important is this technology's numerous applications in various domains (e.g., robotics, games, medicine, automotive, etc.). Additionally, the introduction of three-dimensional (3D) image acquisition techniques (e.g., stereovision, projected light, time-of-flight, etc.) overcomes the limitations of traditional two-dimensional (2D) approaches. Combined with the wider availability of 3D sensors (e.g., Microsoft Kinect, Intel RealSense, photonic mixer device (PMD), CamCube, etc.), interest in this domain has recently surged. Moreover, in many computer vision tasks, traditional statistical approaches have been outperformed by deep neural network-based solutions. In view of these considerations, we propose a deep neural network solution employing the PointNet architecture for hand gesture recognition using depth data produced by a time-of-flight (ToF) sensor. We created a custom hand gesture dataset and propose a multistage hand segmentation consisting of filtering, clustering, locating the hand within a volume of interest, and hand-forearm segmentation. For comparison purposes, two equivalent datasets were tested: a 3D point cloud dataset and a 2D image dataset, both obtained from the same stream. Beyond the general advantages of 3D technology, the accuracy of the 3D method using PointNet proved to outperform the 2D method in all circumstances, even when the 2D method employs a deep neural network.
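The volume-of-interest stage of the multistage segmentation can be illustrated as a simple axis-aligned crop of the point cloud around an estimated hand position. The function name and box parameterization below are assumptions for illustration, not the paper's actual filtering or clustering pipeline:

```python
import numpy as np

def crop_volume_of_interest(points, center, extent):
    # keep only the points inside an axis-aligned box of the given
    # extent, centered on the (previously estimated) hand position;
    # points is an (N, 3) array of x, y, z coordinates in meters
    lo, hi = center - extent / 2.0, center + extent / 2.0
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]
```

A crop like this would precede the finer hand-forearm separation, leaving PointNet to classify only the points plausibly belonging to the hand.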


Author(s):  
Kazuki Nagasawa ◽  
Kensuke Fukumoto ◽  
Wataru Arai ◽  
Kunio Hakkaku ◽  
Satoshi Kaneko ◽  
...  

In this article, the authors propose a method to estimate the ink layer layout for a three-dimensional (3D) printer. This enables 3D-printed skin to be produced with a desired translucency, which they represent as a line spread function (LSF). A deep neural network in an encoder–decoder configuration is used for the estimation. It was previously reported that machine learning is an effective way to formulate the complex relationship between optical properties such as the LSF and the ink layer layout in a 3D printer. However, although 3D printers have become more widespread, the printing process is still time-consuming, so it may be difficult to collect enough data to train a neural network sufficiently. Therefore, in this research, the authors prepare the training data, namely the correspondence between an LSF and the ink layer layout in a 3D printer, via computer simulation, using a method that simulates the subsurface scattering of light in multilayered media. The deep neural network was trained with the simulated data and evaluated on a CG skin object. The results show that the proposed method can estimate an appropriate ink layer layout that closely reproduces the target color and translucency.
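The encoder–decoder idea, compressing a sampled LSF curve into a latent code and then decoding that code into a per-layer ink layout, can be sketched as a toy two-layer network. All shapes, weights, and the activation choice here are illustrative, not the authors' trained model:

```python
import numpy as np

def encoder_decoder_forward(lsf, w_enc, w_dec):
    # toy forward pass: encode the sampled LSF curve into a latent
    # code, then decode the code into per-layer ink amounts
    relu = lambda x: np.maximum(x, 0.0)
    latent = relu(lsf @ w_enc)    # (n_samples,) @ (n_samples, n_latent)
    layout = relu(latent @ w_dec) # (n_latent,) @ (n_latent, n_layers)
    return layout

# illustrative sizes: an LSF sampled at 8 points, 4 latent units,
# and an ink layout over 16 printed layers
lsf = np.linspace(1.0, 0.0, 8)
w_enc = np.full((8, 4), 0.1)
w_dec = np.full((4, 16), 0.1)
layout = encoder_decoder_forward(lsf, w_enc, w_dec)
```

The appeal of simulated training data is visible even in this sketch: generating (LSF, layout) pairs by rendering costs seconds, while printing and measuring a physical sample costs hours.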


Author(s):  
Jiaqi Ding ◽  
Zehua Zhang ◽  
Jijun Tang ◽  
Fei Guo

Changes in fundus blood vessels reflect the occurrence of eye diseases, and from them we can detect other physical diseases that cause fundus lesions, such as diabetes and hypertension complications. However, existing computational methods lack efficient and precise segmentation of vascular ends and thin retinal vessels. Constructing a reliable, quantitative automatic diagnostic method is important for improving diagnostic efficiency. In this study, we propose a multichannel deep neural network for retinal vessel segmentation. First, we apply U-net to the original and the thin (or thick) vessels in a multi-objective optimization that purposely trains on thick and thin vessels separately. Then, we design a specific fusion mechanism that combines the three prediction probability maps into a final binary segmentation map. Experiments show that our method effectively improves segmentation performance on thin blood vessels and vascular ends, outperforming many state-of-the-art vessel segmentation methods on three public datasets. Notably, we achieve the best F1-scores of 0.8247 on the DRIVE dataset and 0.8239 on the STARE dataset. The findings of this study have potential for application in automated retinal image analysis and may provide a new, general, high-performance computing framework for image segmentation.
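One plausible way to combine three probability maps into a single binary segmentation is a pixel-wise maximum followed by thresholding, sketched below; the paper's actual fusion mechanism is more specific and may differ:

```python
import numpy as np

def fuse_probability_maps(p_orig, p_thin, p_thick, threshold=0.5):
    # pixel-wise maximum across the three channel predictions
    # (original, thin-vessel, thick-vessel), then binarization
    fused = np.maximum.reduce([p_orig, p_thin, p_thick])
    return (fused >= threshold).astype(np.uint8)
```

A maximum-style fusion lets the thin-vessel channel recover faint vessel ends that the original-image channel scores below threshold, which is the failure mode the multichannel design targets.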

