Orientation estimation of anatomical structures in medical images for object recognition
Author(s): Ulaş Bağci, Jayaram K. Udupa, Xinjian Chen

Sensors, 2020, Vol. 20 (10), pp. 2962
Author(s): Santiago González Izard, Ramiro Sánchez Torres, Óscar Alonso Plaza, Juan Antonio Juanes Méndez, Francisco José García-Peñalvo

The visualization of medical images with advanced techniques such as augmented reality and virtual reality represents a breakthrough for medical professionals. In contrast to more traditional visualization tools that lack 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must first be segmented. Currently, manual segmentation, the most commonly used technique, and semi-automatic approaches are time consuming because they require a doctor, making segmentation of each individual case unfeasible. Using new technologies, namely computer vision and artificial intelligence for the segmentation algorithms and augmented and virtual reality for the visualization techniques, we designed a complete platform to solve this problem and allow medical professionals to work more frequently with anatomical 3D models obtained from medical imaging. As a result, the Nextmed project, through its different software applications, permits the importation of Digital Imaging and Communications in Medicine (DICOM) images into a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve upon current research results. A 3D mesh of the segmented structure is then generated automatically; it can be printed in 3D or visualized with the designed software systems using both augmented and virtual reality. The Nextmed project is unique in that it covers the whole process, from uploading DICOM images to automatic segmentation, 3D reconstruction, 3D visualization, and manipulation using augmented and virtual reality. Much research has addressed the application of augmented and virtual reality to 3D visualization of medical images; however, those systems are not automated platforms. Although other anatomical structures can be studied, we focused on one case: a lung study.
Analyzing the application of the platform to more than 1000 DICOM images and reviewing the results with medical specialists, we concluded that installing this system in hospitals would provide a considerable improvement as a medical image visualization tool.
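The stages of such an automated pipeline can be sketched as follows. This is an illustrative stand-in, not the Nextmed implementation: the synthetic volume, the threshold-based segmentation, and the function names are assumptions for the sketch. A real system would read DICOM series (e.g. with pydicom), run a trained segmentation model, and extract a mesh via marching cubes.

```python
# Illustrative sketch of a DICOM-to-3D pipeline (NOT the Nextmed code):
# load a volume, segment a structure, and report the mesh-ready voxels.

def load_volume():
    """Stand-in for DICOM import: a tiny 4x4x4 intensity volume."""
    return [[[(x + y + z) * 10 for x in range(4)]
             for y in range(4)]
            for z in range(4)]

def segment(volume, threshold):
    """Stand-in for automatic segmentation: simple intensity threshold."""
    return [[[1 if v >= threshold else 0 for v in row]
             for row in slc]
            for slc in volume]

def surface_voxel_count(mask):
    """Count foreground voxels with at least one background 6-neighbour,
    i.e. the voxels a marching-cubes step would build the mesh around."""
    dz, dy, dx = len(mask), len(mask[0]), len(mask[0][0])

    def bg(z, y, x):
        if not (0 <= z < dz and 0 <= y < dy and 0 <= x < dx):
            return True  # outside the volume counts as background
        return mask[z][y][x] == 0

    count = 0
    for z in range(dz):
        for y in range(dy):
            for x in range(dx):
                if mask[z][y][x] == 1 and any(
                        bg(z + a, y + b, x + c)
                        for a, b, c in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                        (0, -1, 0), (0, 0, 1), (0, 0, -1))):
                    count += 1
    return count

volume = load_volume()
mask = segment(volume, threshold=50)
print(surface_voxel_count(mask))
```

The key design point the abstract makes is that every stage after the upload is automatic, so no per-case intervention by a doctor is needed before the 3D model reaches the AR/VR viewer.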


2017, Vol. 36 (7), pp. 1470-1481
Author(s): Bob D. de Vos, Jelmer M. Wolterink, Pim A. de Jong, Tim Leiner, Max A. Viergever, ...

Sensors, 2021, Vol. 21 (9), pp. 3249
Author(s): Jaemoon Hwang, Sangheum Hwang

In this paper, we propose a method to enhance the performance of segmentation models for medical images. The method is based on convolutional neural networks that learn the global structure information, which corresponds to anatomical structures in medical images. Specifically, the proposed method is designed to learn the global boundary structures via an autoencoder and constrain a segmentation network through a loss function. In this manner, the segmentation model performs the prediction in the learned anatomical feature space. Unlike previous studies that considered anatomical priors by using a pre-trained autoencoder to train segmentation networks, we propose a single-stage approach in which the segmentation network and autoencoder are jointly learned. To verify the effectiveness of the proposed method, the segmentation performance is evaluated in terms of both the overlap and distance metrics on the lung area and spinal cord segmentation tasks. The experimental results demonstrate that the proposed method can enhance not only the segmentation performance but also the robustness against domain shifts.
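The combined objective described above can be sketched as a per-pixel segmentation loss plus a shape-prior term that compares prediction and ground truth in an encoded space. Everything here is an illustrative assumption, not the paper's implementation: the "encoder" is a fixed 2x2 average-pooling stand-in for the autoencoder's learned anatomical feature space, and the weighting `lam` is arbitrary; the paper's networks are convolutional and jointly trained by backpropagation in a single stage.

```python
import math

# Illustrative sketch of a joint objective L_total = L_seg + lambda * L_shape
# (NOT the paper's code): the shape term penalizes anatomically implausible
# predictions even when the per-pixel loss alone would not distinguish them.

def bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy over flattened 2D masks (L_seg)."""
    total, n = 0.0, 0
    for pr, tr in zip(pred, target):
        for p, t in zip(pr, tr):
            p = min(max(p, eps), 1 - eps)
            total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
            n += 1
    return total / n

def encode(mask):
    """Stand-in encoder: 2x2 average pooling, capturing coarse shape."""
    h, w = len(mask), len(mask[0])
    return [[(mask[i][j] + mask[i][j + 1]
              + mask[i + 1][j] + mask[i + 1][j + 1]) / 4
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

def shape_loss(pred, target):
    """Squared distance between encodings (the anatomical-prior term)."""
    ep, et = encode(pred), encode(target)
    return sum((a - b) ** 2
               for rp, rt in zip(ep, et)
               for a, b in zip(rp, rt))

def joint_loss(pred, target, lam=0.5):
    """Single combined objective, optimized in one stage."""
    return bce(pred, target) + lam * shape_loss(pred, target)

target = [[0, 0, 1, 1]] * 4          # ground-truth mask: right half foreground
good = [[0.1, 0.1, 0.9, 0.9]] * 4    # prediction matching the anatomy
bad  = [[0.9, 0.1, 0.1, 0.9]] * 4    # same foreground mass, wrong shape

print(joint_loss(good, target) < joint_loss(bad, target))
```

Both loss terms are differentiable in the prediction, which is what allows the segmentation network and the autoencoder to be learned jointly in a single stage rather than pre-training the autoencoder separately.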

