Human pulmonary acinar airspace segmentation from three-dimensional synchrotron radiation micro CT images of secondary pulmonary lobule

2011 ◽  
Author(s):  
Y. Kawata ◽  
T. Hosokawa ◽  
N. Niki ◽  
K. Umetani ◽  
Y. Nakano ◽  
...  

Author(s):  
Pei Dong ◽  
Sebastien Valette ◽  
Maria A. Zuluaga ◽  
Galateia J. Kazakia ◽  
Francoise Peyrin

2007 ◽  
Author(s):  
Lijun Shi ◽  
Jacqueline Thiesse ◽  
Geoffrey McLennan ◽  
Eric A. Hoffman ◽  
Joseph M. Reinhardt

2014 ◽  
Vol 41 (10) ◽  
pp. 101904 ◽  
Author(s):  
Jianzhong Hu ◽  
Yong Cao ◽  
Tianding Wu ◽  
Dongzhe Li ◽  
Hongbin Lu

2021 ◽  
Author(s):  
Evropi Toulkeridou ◽  
Carlos Enrique Gutierrez ◽  
Daniel Baum ◽  
Kenji Doya ◽  
Evan P Economo

Three-dimensional (3D) imaging, such as micro-computed tomography (micro-CT), is increasingly being used by organismal biologists for precise and comprehensive anatomical characterization. However, the segmentation of anatomical structures remains a bottleneck in research, often requiring tedious manual work. Here, we propose a pipeline for the fully automated segmentation of anatomical structures in micro-CT images utilizing state-of-the-art deep learning methods, selecting the ant brain as a test case. We implemented the U-Net architecture for 2D image segmentation for our convolutional neural network (CNN), combined with pixel-island detection. For training and validation of the network, we assembled a dataset of semi-manually segmented brain images of 94 ant species. The trained network predicted the brain area in ant images quickly and accurately; its performance tested on validation sets showed good agreement between the prediction and the target, scoring 80% Intersection over Union (IoU) and 90% Dice coefficient (F1) accuracy. While manual segmentation usually takes many hours for each brain, the trained network takes only a few minutes. Furthermore, our network is generalizable for segmenting the whole neural system in full-body scans, and works in tests on distantly related and morphologically divergent insects (e.g., fruit flies). The latter suggests that methods like the one presented here generally apply across diverse taxa. Our method makes the construction of segmented maps and the morphological quantification of different species more efficient and scalable to large datasets, a step toward a big data approach to organismal anatomy.
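The reported accuracy metrics, Intersection over Union (IoU) and the Dice coefficient (F1), are standard overlap measures between a predicted and a reference segmentation mask. A minimal sketch of both, representing each mask as a set of foreground pixel coordinates (the set representation is an illustrative choice, not the authors' implementation):

```python
def iou(pred, target):
    """Intersection over Union of two foreground pixel sets."""
    union = len(pred | target)
    return len(pred & target) / union if union else 1.0

def dice(pred, target):
    """Dice coefficient (F1) of two foreground pixel sets."""
    total = len(pred) + len(target)
    return 2 * len(pred & target) / total if total else 1.0

# Toy masks as coordinate sets: two of the pixels overlap.
pred = {(0, 0), (0, 1), (1, 0)}
target = {(0, 0), (0, 1), (1, 1)}
print(iou(pred, target))   # 0.5
print(dice(pred, target))  # 0.666...
```

Note that Dice = 2·IoU / (1 + IoU) for the same pair of masks, so the reported 80% IoU corresponds to roughly 89% Dice, consistent with the two figures quoted in the abstract.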


2021 ◽  
pp. rapm-2021-102588
Author(s):  
Tae-Hyeon Cho ◽  
Shin Hyung Kim ◽  
Jehoon O ◽  
Hyun-Jin Kwon ◽  
Ki Wook Kim ◽  
...  

Background: A precise anatomical understanding of the thoracic paravertebral space (TPVS) is essential to understanding how an injection outside this space can result in paravertebral spread. Therefore, we aimed to clarify the three-dimensional (3D) structures of the TPVS and adjacent tissues using micro-CT, and to investigate the potential routes for nerve blockade in this area.
Methods: Eleven embalmed cadavers were used in this study. Micro-CT images of the TPVS were acquired after phosphotungstic acid preparation at the mid-thoracic region. The TPVS was examined meticulously based on its 3D topography.
Results: Micro-CT images clearly showed the serial topography of the TPVS and its adjacent spaces. First, the TPVS was a very narrow space, with the posterior intercostal vessels lying very close to the pleura. Second, the superior costotransverse ligament (SCTL) incompletely formed the posterior wall of the TPVS between the internal intercostal membrane and the vertebral body. Third, the retro-SCTL space communicated broadly with the TPVS via slits, the costotransverse space, the intervertebral foramen, and the erector spinae compartment. Fourth, the costotransverse space was intersegmentally connected to the adjacent retro-SCTL space.
Conclusions: A non-destructive, multi-sectional approach using 3D micro-CT demonstrated the real topography of the intricate TPVS more comprehensively than previous cadaver studies. The posterior boundary and connectivity of the TPVS provide an anatomical rationale for the notion that paravertebral spread can be achieved with an injection outside this space.


Bone ◽  
2014 ◽  
Vol 60 ◽  
pp. 172-185 ◽  
Author(s):  
Pei Dong ◽  
Sylvain Haupert ◽  
Bernhard Hesse ◽  
Max Langer ◽  
Pierre-Jean Gouttenoire ◽  
...  

PLoS ONE ◽  
2011 ◽  
Vol 6 (7) ◽  
pp. e21297 ◽  
Author(s):  
Aymeric Larrue ◽  
Aline Rattner ◽  
Zsolt-Andrei Peter ◽  
Cécile Olivier ◽  
Norbert Laroche ◽  
...  

Radiology ◽  
2003 ◽  
Vol 229 (3) ◽  
pp. 921-928 ◽  
Author(s):  
Estela Martín-Badosa ◽  
Daniel Amblard ◽  
Stefania Nuzzo ◽  
Abdelmajid Elmoutaouakkil ◽  
Laurence Vico ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Johan Phan ◽  
Leonardo C. Ruspini ◽  
Frank Lindseth

Obtaining an accurate segmentation of images acquired by computed microtomography (micro-CT) techniques is a non-trivial process due to the wide range of noise types and artifacts present in these images. Current methodologies are often time-consuming, sensitive to noise and artifacts, and require skilled operators to give accurate results. Motivated by the rapid advancement of deep learning-based segmentation techniques in recent years, we have developed a tool that aims to fully automate the segmentation process in one step, without the need for extra image processing steps such as noise filtering or artifact removal. To obtain a general model, we train our network on a dataset of high-quality three-dimensional micro-CT images from different scanners, rock types, and resolutions. In addition, we use a domain-specific augmented training pipeline with various types of noise, synthetic artifacts, and image transformations/distortions. For validation, we use a synthetic dataset to measure accuracy and analyze noise/artifact sensitivity. The results show robust and accurate segmentation performance for the most common types of noise present in real micro-CT images. We also compared our method's segmentation with that of five expert users employing commercial and open-source software packages on real rock images. We found that most of the current tools fail to reduce the impact of local and global noise and artifacts. We quantified the variation in human-assisted segmentation results in terms of physical properties and observed large variation. In comparison, the new method is more robust to local noise and artifacts, outperforming the human segmentations and giving consistent results. Finally, we compared the porosity of our model-segmented images with experimental porosity measured in the laboratory for ten different untrained samples, finding very encouraging results.
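The final validation step, comparing model-derived porosity against laboratory measurements, reduces to counting pore voxels in the segmented volume. A minimal sketch of that computation, assuming a binary labeling in which 0 marks pore space (the label convention is an assumption for illustration, not taken from the paper):

```python
def porosity(voxels, pore_label=0):
    """Fraction of voxels labeled as pore space in a segmented 3D volume,
    given as nested lists [slice][row][column]."""
    flat = [v for slab in voxels for row in slab for v in row]
    pores = sum(1 for v in flat if v == pore_label)
    return pores / len(flat)

# Toy 2x2x2 segmented volume: 3 pore voxels (label 0) out of 8.
volume = [[[0, 1], [1, 1]],
          [[0, 0], [1, 1]]]
print(porosity(volume))  # 0.375
```

In practice the segmented volume would be a NumPy array and the count a single `(arr == pore_label).mean()` reduction, but the arithmetic is the same: porosity is simply the pore-voxel fraction.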

