Semantically Guided Large Deformation Estimation with Deep Networks

Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1392
Author(s):  
In Young Ha ◽  
Matthias Wilms ◽  
Mattias Heinrich

Deformable image registration is still a challenge when the considered images have strong variations in appearance and large initial misalignment. A huge performance gap currently remains for fast-moving regions in videos or strong deformations of natural objects. We present a new semantically guided, two-step deep deformation network that is particularly well suited for the estimation of large deformations. We combine a U-Net architecture, weakly supervised with segmentation information to extract semantically meaningful features, with multiple stages of nonrigid spatial transformer networks parameterized with low-dimensional B-spline deformations. Combining alignment and semantic loss functions with a regularization penalty to obtain smooth and plausible deformations, we achieve superior alignment quality compared to previous approaches that have only considered a label-driven alignment loss. Our network model advances the state of the art for inter-subject face part alignment and motion tracking in medical cardiac magnetic resonance imaging (MRI) sequences compared to FlowNet and Label-Reg, two recent deep-learning registration frameworks. The models are compact, very fast at inference, and demonstrate clear potential for a variety of challenging tracking and/or alignment tasks in computer vision and medical image analysis.
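The low-dimensional B-spline parameterization described above can be illustrated with a minimal NumPy sketch. This is not the authors' network: the bilinear densification, nearest-neighbour warp, and loss weights are simplifications for illustration. A coarse control grid is densified into a full-resolution displacement field, and the loss combines an alignment term with a smoothness penalty on the control points.

```python
import numpy as np

def upsample_bilinear(ctrl, out_shape):
    """Densify a coarse control-point displacement grid (H_c, W_c)
    to a full-resolution field (H, W) by bilinear interpolation."""
    hc, wc = ctrl.shape
    H, W = out_shape
    ys = np.linspace(0, hc - 1, H)
    xs = np.linspace(0, wc - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, hc - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, wc - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    return ((1 - wy) * (1 - wx) * ctrl[np.ix_(y0, x0)]
            + (1 - wy) * wx * ctrl[np.ix_(y0, x1)]
            + wy * (1 - wx) * ctrl[np.ix_(y1, x0)]
            + wy * wx * ctrl[np.ix_(y1, x1)])

def warp(img, dy, dx):
    """Warp an image by a displacement field (nearest-neighbour sampling)."""
    H, W = img.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    ys = np.clip(np.round(yy + dy).astype(int), 0, H - 1)
    xs = np.clip(np.round(xx + dx).astype(int), 0, W - 1)
    return img[ys, xs]

def registration_loss(moving, fixed, ctrl_dy, ctrl_dx, lam=0.1):
    """Alignment term plus a smoothness penalty on the coarse control grid."""
    dy = upsample_bilinear(ctrl_dy, moving.shape)
    dx = upsample_bilinear(ctrl_dx, moving.shape)
    warped = warp(moving, dy, dx)
    align = np.mean((warped - fixed) ** 2)
    smooth = (np.sum(np.diff(ctrl_dy, axis=0) ** 2) + np.sum(np.diff(ctrl_dy, axis=1) ** 2)
              + np.sum(np.diff(ctrl_dx, axis=0) ** 2) + np.sum(np.diff(ctrl_dx, axis=1) ** 2))
    return align + lam * smooth
```

A 4x4 control grid driving an 8x8 field keeps the deformation low-dimensional; a constant control grid yields zero smoothness penalty, mirroring the paper's preference for smooth, plausible deformations.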

2021 ◽  
Vol 7 (2) ◽  
pp. 19
Author(s):  
Tirivangani Magadza ◽  
Serestina Viriri

Quantitative analysis of brain tumors provides valuable information for better understanding tumor characteristics and for treatment planning. Accurate segmentation of lesions requires more than one image modality with varying contrasts. As a result, manual segmentation, which is arguably the most accurate segmentation method, would be impractical for more extensive studies. Deep learning has recently emerged as a solution for quantitative analysis due to its record-shattering performance. However, medical image analysis has its unique challenges. This paper presents a review of state-of-the-art deep learning methods for brain tumor segmentation, clearly highlighting their building blocks and various strategies. We end with a critical discussion of open challenges in medical image analysis.


2011 ◽  
Vol 58-60 ◽  
pp. 2370-2375
Author(s):  
Wei Li Ding ◽  
Feng Jiang ◽  
Jia Qing Yan

Magnetic Resonance Imaging (MRI) has been widely used in clinical diagnosis. Segmentation of the images obtained by MRI is a necessary procedure in medical image processing. In this paper, an improved level set algorithm is proposed to optimize the segmentation of MRI image sequences, based on article [1]. First, we add an area term and an edge indicator function to the total energy function for single-image segmentation. Second, we present a new method that uses the circumscribed polygon of the previous segmentation result as the initial contour of the next image, achieving automatic segmentation of image sequences. The algorithm was tested on MRI image sequences provided by Chuiyanliu Hospital, Chaoyang District, Beijing; the results indicate that the proposed algorithm can effectively improve the segmentation speed and quality of MRI sequences.
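The modified energy described above (an area term weighted by an edge indicator function) can be sketched as follows. This is an illustrative reconstruction using standard geodesic-active-contour terms, not the authors' exact formulation; a crisp Heaviside replaces the usual smoothed one for brevity.

```python
import numpy as np

def edge_indicator(img):
    """g = 1 / (1 + |grad I|^2): near 0 at strong edges, near 1 in flat regions."""
    gy, gx = np.gradient(img)
    return 1.0 / (1.0 + gy ** 2 + gx ** 2)

def area_term(phi, g):
    """Edge-weighted area of the region {phi < 0}; speeds up contour
    evolution when the front is far from object boundaries."""
    inside = (phi < 0).astype(float)  # crisp Heaviside for brevity
    return np.sum(g * inside)

def total_energy(phi, img, mu=1.0, nu=0.5):
    """Edge-weighted contour length plus the weighted area term."""
    g = edge_indicator(img)
    gy, gx = np.gradient(phi)
    length = np.sum(g * np.sqrt(gy ** 2 + gx ** 2))
    return mu * length + nu * area_term(phi, g)
```

Minimizing this energy drives the zero level set of `phi` toward image edges (where `g` is small), while the area term pushes the front across flat regions; the weights `mu` and `nu` here are placeholders, not values from the paper.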


Author(s):  
Vincent Christlein ◽  
Florin C. Ghesu ◽  
Tobias Würfl ◽  
Andreas Maier ◽  
Fabian Isensee ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (17) ◽  
pp. 2132
Author(s):  
Kyriakos D. Apostolidis ◽  
George A. Papakostas

In past years, deep neural networks (DNN) have become popular in many disciplines such as computer vision (CV) and natural language processing (NLP). The evolution of hardware has helped researchers develop many powerful Deep Learning (DL) models to address numerous challenging problems. One of the most important challenges in the CV area is Medical Image Analysis, in which DL models process medical images, such as magnetic resonance imaging (MRI), X-ray, and computed tomography (CT), using convolutional neural networks (CNN) for the diagnosis or detection of several diseases. The proper functioning of these models can significantly improve health systems. However, recent studies have shown that CNN models are vulnerable to adversarial attacks with imperceptible perturbations. In this paper, we summarize existing methods for adversarial attacks, detection, and defenses in medical imaging. We show that many attacks that are undetectable by the human eye can significantly degrade the performance of the models. Nevertheless, some effective defense and attack-detection methods keep the models safe to an extent. We end with a discussion of the current state of the art and future challenges.
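One of the simplest attacks of this kind is the fast gradient sign method (FGSM), which perturbs every input component by ±ε along the sign of the loss gradient. A minimal sketch on a toy logistic "model" (the weights and input below are invented for illustration, not a real imaging model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, b, y, eps):
    """FGSM on a logistic-regression model: move x by eps per component
    in the direction that increases the cross-entropy loss."""
    p = sigmoid(w @ x + b)
    grad = (p - y) * w  # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad)

# a toy "image" classified as positive (p > 0.5)
w = np.array([1.0, -0.5, 0.8]); b = 0.0
x = np.array([0.6, 0.1, 0.4])
p_clean = sigmoid(w @ x + b)              # ~0.70: positive
x_adv = fgsm_attack(x, w, b, y=1.0, eps=0.5)
p_adv = sigmoid(w @ x_adv + b)            # ~0.43: prediction flipped
```

Although no component of the input moves by more than ε, the prediction flips; with real CNNs and small ε the perturbation is imperceptible to the human eye, which is exactly the vulnerability the survey covers.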


Diagnostics ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 1964
Author(s):  
Reza Kalantar ◽  
Gigin Lin ◽  
Jessica M. Winfield ◽  
Christina Messiou ◽  
Susan Lalondrelle ◽  
...  

The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variabilities. This review provides a comprehensive, non-systematic and clinically-oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.


Medical image analysis has gained research momentum over the last ten years. Medical images of different modalities, such as X-ray, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Ultrasound, are generated at a volume growing by 15% to 20% every year. Medical image analysis requires high processing power and large memory for storing the images, processing them, extracting features for useful information, and segmenting the region of interest for analysis. This is where deep learning proves promising for medical image analysis. The major focus of the paper is exploring the literature on the broad areas of medical image analysis, namely Image Classification, Tumor/Lesion Classification and Detection, Organ/Sub-structure Segmentation, Image Registration, and Image Construction/Enhancement using deep learning. The paper also highlights the physiological and medical challenges to be considered while analyzing medical images, and discusses the technical challenges of using deep learning for medical image analysis along with their solutions.


2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Jayaraman J. Thiagarajan ◽  
Kowshik Thopalli ◽  
Deepta Rajan ◽  
Pavan Turaga

The rapid adoption of artificial intelligence methods in healthcare is coupled with the critical need for techniques to rigorously introspect models and thereby ensure that they behave reliably. This has led to the design of explainable AI techniques that uncover the relationships between discernible data signatures and model predictions. In this context, counterfactual explanations that synthesize small, interpretable changes to a given query while producing desired changes in model predictions have become popular. This under-constrained, inverse problem is vulnerable to introducing irrelevant feature manipulations, particularly when the model’s predictions are not well-calibrated. Hence, in this paper, we propose the TraCE (training calibration-based explainers) technique, which utilizes a novel uncertainty-based interval calibration strategy for reliably synthesizing counterfactuals. Given the widespread adoption of machine-learned solutions in radiology, our study focuses on deep models used for identifying anomalies in chest X-ray images. Using rigorous empirical studies, we demonstrate the superiority of TraCE explanations over several state-of-the-art baseline approaches, in terms of several widely adopted evaluation metrics. Our findings show that TraCE can be used to obtain a holistic understanding of deep models by enabling progressive exploration of decision boundaries, to detect shortcuts, and to infer relationships between patient attributes and disease severity.
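The counterfactual synthesis step can be illustrated generically. The sketch below is not TraCE itself, which adds uncertainty-based interval calibration; it is plain gradient descent on a toy logistic model, balancing a prediction-change objective against an L2 proximity penalty (all names and values are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, w, b, target=1.0, lam=0.05, lr=0.5, steps=200):
    """Find x' close to x whose logistic prediction moves toward `target`.
    The lam term keeps the counterfactual near the original query."""
    xc = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ xc + b)
        grad = (p - target) * w + lam * (xc - x)  # prediction loss + proximity
        xc -= lr * grad
    return xc

w = np.array([2.0, -1.0]); b = -0.5
x = np.array([-0.5, 0.5])        # query classified negative (p < 0.5)
xc = counterfactual(x, w, b)     # nearby point classified positive
```

The trade-off weight `lam` controls how small the edit stays; the paper's point is that without calibrated predictions, this kind of under-constrained search can latch onto irrelevant features.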


2005 ◽  
Author(s):  
Xenophon Papademetris ◽  
Marcel Jackowski ◽  
Nallakkandi Rajeevan ◽  
R. Todd Constable ◽  
Lawrence Staib

BioImage Suite is an integrated image analysis software suite developed at Yale. It uses a combination of C++ and Tcl in the same fashion as that pioneered by the Visualization Toolkit (VTK) and it leverages both VTK and the Insight Toolkit. It has extensive capabilities for both neuro/cardiac and abdominal image analysis and state of the art visualization. It is currently in use at Yale; a first public release is expected before the end of 2005.

