A Digital Pathology Platform for Artificial Intelligence Data Sharing (Preprint)

2020 ◽  
Author(s):  
Yunsook Kang ◽  
Yoo Jung Kim ◽  
Seongkeun Park ◽  
Gun Ro ◽  
Choyeon Hong ◽  
...  

BACKGROUND High-quality learning materials are needed for artificial intelligence (AI) development but are rarely available in practice; the situation is especially poor in the medical field. In particular, annotating medical images (e.g., annotating tumor areas by pathologists) is labor-intensive as well as expensive, and the images are subject to privacy protection. These are major barriers that keep AI developers from accessing and reproducing medical image data. OBJECTIVE This study aimed to reduce the barriers AI researchers face in accessing medical image datasets by collating and sharing high-quality medical images annotated by pathologists, and to find practical ways to apply AI-based diagnostic assistance to reduce the pathologists’ workload. METHODS Pathology slides of tumors of five organs (liver, colon, prostate, pancreas and biliary tract, and kidney) from histologically confirmed cases were selected for this study. After the slides were scanned to obtain whole-slide digital images, the patient information was de-identified and the tumor areas were annotated by pathologists. An AI-assisted annotation process was used in parallel to reduce the pathologists’ annotation workload and to draw complex lesion boundaries more accurately. As a result, all the data include annotations confirmed by experienced pathologists and can be used as an AI training dataset. RESULTS A web-based data-sharing platform for AI training was built and unveiled in 2019. In total, 3,100 datasets covering carcinomas of the 5 organs were shared through this platform and made accessible to all researchers. The platform has the advantage that users can search the data visually and intuitively, and all researchers may use the provided datasets free of charge for non-commercial research. The platform also provides five image pre-processing algorithms that can help those learning to build AI models.
CONCLUSIONS We built and operate a web-based data-sharing platform that provides AI researchers with a high-quality digital pathology dataset annotated by pathologists. By sharing the issues encountered in collecting and sharing these valuable data, we hope our experience will help researchers who want to build such a platform in the future.

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Lukman E. Mansuri ◽  
D.A. Patel

Purpose: Heritage is the latent part of a sustainable built environment, and the conservation and preservation of heritage is one of the United Nations' (UN) sustainable development goals. Many social and natural factors seriously threaten heritage structures by deteriorating and damaging their original fabric. Regular visual inspection of heritage structures is therefore necessary for their conservation and preservation. Conventional practice relies on manual inspection, which takes much time and human resources; an innovative approach is sought that is cheaper, faster, safer and less prone to human error than manual inspection. This study therefore aims to develop an automatic visual inspection system for the built heritage. Design/methodology/approach: An artificial intelligence-based automatic defect detection system is developed using the faster R-CNN (faster region-based convolutional neural network) object detection model. Images of heritage structures in the English and Dutch cemeteries of Surat (India) were captured with a digital camera to prepare the image data set, which was used for training, validation and testing of the automatic defect detection model. During validation, the model's optimum detection accuracy was recorded as 91.58% for detecting three types of defects: “spalling,” “exposed bricks” and “cracks.” Findings: This study develops an automatic, web-based visual inspection system for heritage structures using the faster R-CNN and demonstrates the detection of spalling, exposed bricks and cracks in heritage structures. A comparison of the conventional (manual) and the developed automatic inspection systems reveals that the automatic system requires less time and staff; routine inspection can therefore be faster, cheaper, safer and more accurate than the conventional inspection method. Practical implications: The study can improve the inspection of built heritage by reducing inspection time and cost, eliminating chances of human error and accidents, and providing accurate and consistent information, thereby helping to ensure the sustainability of the built heritage. Originality/value: This study presents an artificial intelligence-based methodology for developing an automatic visual inspection system to ensure the sustainability of built heritage. An automatic web-based visual inspection system for the built heritage has not been reported in previous studies.
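Faster R-CNN-style detectors such as the one described here typically post-process their raw, overlapping box proposals with non-maximum suppression (NMS). The sketch below is illustrative only (it is not the authors' code, and the names are ours); it shows the standard NMS step on plain Python lists:

```python
# Illustrative sketch of non-maximum suppression, the standard
# post-processing step in Faster R-CNN-style defect detectors.
# Boxes are (x1, y1, x2, y2) tuples; scores are detection confidences.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box in each cluster of overlapping boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

In a detector for "spalling", "exposed bricks" and "cracks", NMS would be run per defect class so that overlapping detections of different defects are both kept.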


2019 ◽  
Vol 8 (4) ◽  
pp. 462 ◽  
Author(s):  
Muhammad Owais ◽  
Muhammad Arsalan ◽  
Jiho Choi ◽  
Kang Ryoung Park

Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts because of the limited attention span of the human visual system, which can adversely affect medical treatment. This problem can be mitigated by exploring similar cases in previous medical databases through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have grown rapidly with the advent of different medical imaging modalities. In current practice, a medical doctor often refers to several imaging modalities together, such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound images of various organs, for the diagnosis and treatment of a specific disease. Accurate classification and retrieval of such multimodal medical imaging data is the key challenge for a CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which show low performance on massive collections of multimodal databases. Although a few previous studies have used deep features for classification, the number of classes considered is very small. To solve this problem, we propose a classification-based retrieval system for multimodal medical images from various imaging modalities, using an artificial intelligence technique named the enhanced residual network (ResNet). Experimental results on 12 databases comprising 50 classes demonstrate that the accuracy and F1-score of our method are 81.51% and 82.42%, respectively, higher than those of the previous CBMIR method (accuracy of 69.71% and F1-score of 69.63%).
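The general idea of classification-based retrieval can be sketched without any deep learning machinery: the classifier's predicted class (e.g., modality) first narrows the search space, and the remaining candidates are ranked by feature similarity. The sketch below is ours, not the paper's implementation; all names and the toy feature vectors are illustrative assumptions:

```python
import math

# Illustrative sketch of classification-based retrieval: filter the
# database by the query's predicted class, then rank the survivors by
# cosine similarity of their (here toy) feature vectors.

def cosine(u, v):
    """Cosine similarity of two non-zero feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_feat, query_class, database, top_k=3):
    """database: list of (image_id, class_label, feature_vector) tuples."""
    candidates = [(i, f) for i, c, f in database if c == query_class]
    ranked = sorted(candidates, key=lambda p: cosine(query_feat, p[1]),
                    reverse=True)
    return [i for i, _ in ranked[:top_k]]
```

In the paper's setting the class label would come from the enhanced ResNet's prediction and the feature vector from one of its intermediate layers; here both are supplied directly for illustration.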


Author(s):  
Caroline Bivik Stadler ◽  
Martin Lindvall ◽  
Claes Lundström ◽  
Anna Bodén ◽  
Karin Lindman ◽  
...  

Abstract Artificial intelligence (AI) holds much promise for enabling highly desired improvements in imaging diagnostics. One of the most limiting bottlenecks for the development of useful clinical-grade AI models is the lack of training data: a large number of cases is needed, and high-quality ground-truth annotations are a necessity. The aim of the project was to establish and describe the construction of a database with substantial amounts of detail-annotated oncology imaging data from pathology and radiology. A specific objective was to be proactive, that is, to support as-yet-undefined subsequent AI training across a wide range of tasks, such as detection, quantification, segmentation, and classification, which puts particular focus on the quality and generality of the annotations. The main outcome of this project was the database as such, with a collection of labeled image data from breast, ovary, skin, colon, skeleton, and liver. In addition, this effort served as an exploration of best practices for further scalability of high-quality image collections, and a main contribution of the study was the generic lessons learned regarding how to successfully organize efforts to construct medical imaging databases for AI training, summarized as eight guiding principles covering team, process, and execution aspects.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Veturia Chiroiu ◽  
Ligia Munteanu ◽  
Rodica Ioan ◽  
Ciprian Dragne ◽  
Luciana Majercsik

Abstract The inverse sonification problem is investigated in this article in order to detect details in a medical image that are hard to capture. The direct problem consists in converting the image data into sound signals by a transformation involving three steps: data, acoustic parameters, and sound representation. The inverse problem converts the sound signals back into image data. If the known sonification operator is used, the inverse approach brings no gain in sonified medical imaging: replicating an already-known image does not help diagnosis or surgery. In order to bring gains in medical imaging, a new sonification operator is advanced in this paper, based on the Burgers equation of sound propagation. Sonified medical imaging is useful for interpreting medical images which, however powerful they may be, are never good enough on their own to aid tumour surgery. The inverse approach is exercised on several medical images used in surgical operations.
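For reference, the viscous Burgers equation, the standard model of nonlinear sound propagation invoked above, takes the textbook form (this is the generic form, not reproduced from the paper; $u$ denotes the acoustic field variable and $\nu$ the diffusivity of sound):

```latex
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
  = \nu\,\frac{\partial^{2} u}{\partial x^{2}}
```

The nonlinear convection term $u\,\partial u/\partial x$ is what distinguishes it from simple linear wave propagation, which is presumably why it can encode image detail that a linear sonification operator cannot.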


2020 ◽  
pp. 002215542095914
Author(s):  
A. Sally Davis ◽  
Mary Y. Chang ◽  
Jourdan E. Brune ◽  
Teal S. Hallstrand ◽  
Brian Johnson ◽  
...  

Advances in reagents, methodologies, analytic platforms, and tools have resulted in a dramatic transformation of the research pathology laboratory. These advances have increased our ability to efficiently generate substantial volumes of data on the expression and accumulation of mRNA, proteins, carbohydrates, signaling pathways, cells, and structures in healthy and diseased tissues that are objective, quantitative, reproducible, and suitable for statistical analysis. The goal of this review is to identify and present how to acquire the critical information required to measure changes in tissues. Included is a brief overview of two morphometric techniques, image analysis and stereology, and the use of artificial intelligence to classify cells and identify hidden patterns and relationships in digital images. In addition, we explore the importance of preanalytical factors in generating high-quality data. This review focuses on techniques we have used to measure proteoglycans, glycosaminoglycans, and immune cells in tissues using immunohistochemistry and in situ hybridization to demonstrate the various morphometric techniques. When performed correctly, quantitative digital pathology is a powerful tool that provides unbiased quantitative data that are difficult to obtain with other methods.


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
R. Eswaraiah ◽  
E. Sreenivasa Reddy

In telemedicine, tampering may be introduced while medical images are transferred. Before any diagnostic decision is made, the integrity of the region of interest (ROI) of the received medical image must be verified to avoid misdiagnosis. In this paper, we propose a novel fragile block-based medical image watermarking technique that avoids embedding distortion inside the ROI, verifies the integrity of the ROI, accurately detects tampered blocks inside the ROI, and recovers the original ROI with zero loss. In the proposed method, the medical image is segmented into three sets of pixels: ROI pixels, region-of-noninterest (RONI) pixels, and border pixels. Authentication data and information about the ROI are then embedded in the border pixels, and recovery data for the ROI are embedded in the RONI. Experiments conducted on a number of medical images reveal that the proposed method produces high-quality watermarked medical images, identifies tampering inside the ROI with 100% accuracy, and recovers the original ROI without any loss.
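The core fragile-watermarking idea, deriving authentication bits from the ROI and hiding them in the least-significant bits (LSBs) of pixels outside it, can be sketched as follows. This is our simplification for illustration, not the authors' exact scheme (their method is block-based and also embeds recovery data); pixels are modeled as plain 8-bit integers:

```python
import hashlib

# Illustrative sketch: derive authentication bits from the ROI with a
# hash, embed them in the LSBs of border pixels (leaving the ROI
# untouched), and verify integrity by recomputing and comparing.

def roi_auth_bits(roi_pixels, n_bits):
    """Derive n_bits authentication bits from the ROI via SHA-256."""
    digest = hashlib.sha256(bytes(roi_pixels)).digest()
    bits = []
    for byte in digest:
        for k in range(8):
            bits.append((byte >> k) & 1)
    return bits[:n_bits]

def embed(border_pixels, bits):
    """Overwrite the LSB of each border pixel with one authentication bit."""
    return [(p & ~1) | b for p, b in zip(border_pixels, bits)]

def verify(roi_pixels, watermarked_border):
    """True iff the ROI still matches the bits carried by the border."""
    expected = roi_auth_bits(roi_pixels, len(watermarked_border))
    return all((p & 1) == b for p, b in zip(watermarked_border, expected))
```

Because only the LSB of each border pixel changes, the visual distortion is at most one gray level per border pixel, while any change to the ROI alters the hash and fails verification with overwhelming probability.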


2020 ◽  
Vol 237 (12) ◽  
pp. 1438-1441
Author(s):  
Soenke Langner ◽  
Ebba Beller ◽  
Felix Streckenbach

Abstract Medical images play an important role in ophthalmology and radiology. Medical image analysis has greatly benefited from the application of “deep learning” techniques in clinical and experimental radiology. Clinical applications and their relevance for radiological imaging in ophthalmology are presented.


Cancers ◽  
2021 ◽  
Vol 13 (7) ◽  
pp. 1590
Author(s):  
Laith Alzubaidi ◽  
Muthana Al-Amidie ◽  
Ahmed Al-Asadi ◽  
Amjad J. Humaidi ◽  
Omran Al-Shamma ◽  
...  

Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators from various backgrounds. More importantly, the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for annotation by transferring deep learning models, with their knowledge from a previous task, and then fine-tuning them on a relatively small dataset for the current task. Most medical image classification methods employ transfer learning from models pretrained on natural images, e.g., ImageNet, which has been proven to be ineffective because of the mismatch between the features learned from natural images and those of medical images; it also results in the use of unnecessarily elaborate models. In this paper, we propose a novel transfer learning approach that overcomes these drawbacks by first training the deep learning model on large unlabeled medical image datasets and then transferring the knowledge to train the model on a small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios: skin and breast cancer classification. According to the reported results, the proposed approach significantly improves performance in both scenarios. For skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach.
In the breast cancer scenario, it achieved accuracies of 85.29% when trained from scratch and 97.51% with the proposed approach. We conclude that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available and labeled image data is limited, and that it can improve the performance of medical imaging tasks in the same domain. To demonstrate this, we used the pretrained skin cancer model for training on foot-skin images, classifying them into two classes: normal or abnormal (diabetic foot ulcer, DFU). This task achieved an F1-score of 86.0% when trained from scratch, 96.25% using transfer learning, and 99.25% using double transfer learning.
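The F1-scores quoted throughout this abstract are the harmonic mean of precision and recall. As a reminder of how the metric is computed from raw counts (this is the standard definition, not code from the paper):

```python
# Standard precision/recall/F1 computation from true positives (tp),
# false positives (fp), and false negatives (fn).

def precision_recall_f1(tp, fp, fn):
    """Return (precision, recall, F1) for one class as fractions in [0, 1]."""
    precision = tp / (tp + fp)   # fraction of predicted positives that are right
    recall = tp / (tp + fn)      # fraction of actual positives that are found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1
```

Because F1 is a harmonic mean, it is pulled toward the weaker of precision and recall, which is why it is preferred over plain accuracy on imbalanced medical datasets.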


2021 ◽  
Vol 3 (3) ◽  
pp. 740-770
Author(s):  
Samanta Knapič ◽  
Avleen Malhi ◽  
Rohit Saluja ◽  
Kary Främling

In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Applying three types of explainable methods to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by a convolutional neural network (CNN). In vivo gastric images obtained by video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and an alternative explanation approach, the Contextual Importance and Utility (CIU) method. The produced explanations were assessed by human evaluation: we conducted three user studies based on the explanations provided by LIME, SHAP and CIU, in which users from different non-medical backgrounds carried out a series of tests in a web-based survey setting and stated their experience and understanding of the given explanations. Three user groups (n = 20, 20, 20), each given a distinct form of explanation, were quantitatively analyzed. We found that, as hypothesized, the CIU method performed better than both LIME and SHAP in terms of improving support for human decision-making and being more transparent and thus understandable to users. Additionally, CIU outperformed LIME and SHAP by generating explanations more rapidly. Our findings suggest that there are notable differences in human decision-making between the various explanation support settings. Accordingly, we present three potential explainable methods that, with future improvements in implementation, can be generalized to different medical data sets and can provide effective decision support to medical experts.
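Perturbation-based explainers such as LIME share one underlying idea: perturb regions of the input, re-run the model, and score each region by how much the prediction changes. The occlusion sketch below illustrates that idea in miniature; it is our simplification, not the paper's LIME/SHAP/CIU implementation, and the toy "model" is an assumption:

```python
# Illustrative occlusion-based importance map: mask each patch of the
# input, re-run the model, and record the resulting drop in the score.
# A large drop means the patch mattered to the prediction.

def occlusion_importance(image, predict, patch=2, baseline=0):
    """image: 2D list of pixel values; predict: callable returning a score."""
    base_score = predict(image)
    h, w = len(image), len(image[0])
    importance = [[0.0] * w for _ in range(h)]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = [row[:] for row in image]  # copy, then mask one patch
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    occluded[yy][xx] = baseline
            drop = base_score - predict(occluded)
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    importance[yy][xx] = drop
    return importance
```

LIME refines this idea by fitting a local linear surrogate over many random perturbations, and SHAP weights the perturbations by Shapley values, but the occlusion map already conveys the kind of saliency overlay shown to the study participants.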

