Feasibility of Using Improved Convolutional Neural Network to Classify BI-RADS 4 Breast Lesions: Compare Deep Learning Features of the Lesion Itself and the Minimum Bounding Cube of Lesion

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Meihong Sheng ◽  
Weixia Tang ◽  
Jiahuan Tang ◽  
Ming Zhang ◽  
Shenchu Gong ◽  
...  

The aim was to determine the feasibility of using a deep learning (DL) approach to identify benign and malignant BI-RADS 4 lesions on preoperative breast DCE-MRI images and to compare two 3D segmentation methods. Patients admitted from January 2014 to October 2020 were retrospectively analyzed. Breast MRI examination was performed before surgical resection or biopsy, and the masses were classified as BI-RADS 4. The first postcontrast images of the DCE-MRI T1WI sequence were selected. The lesions were segmented in 3D by two methods: manual segmentation along the edge of the lesion slice by slice, and the minimum bounding cube of the lesion. DL feature extraction was then carried out, with the pixel values of the image data normalized to the 0-1 range. The model was built on the blueprint of the classic residual network ResNet50, retaining its residual modules and converting its 2D convolution modules to 3D. An attention mechanism was also added: the Convolutional Block Attention Module (CBAM), originally designed for 2D convolutions, was extended to a 3D-CBAM to suit 3D MRI. After the last CBAM, the algorithm flattens the output high-dimensional features into a one-dimensional vector, passes it through two fully connected layers, and finally produces two outputs (P1, P2), which represent the probabilities of benign and malignant lesions, respectively. Accuracy, sensitivity, specificity, negative predictive value, positive predictive value, recall rate, and area under the ROC curve (AUC) were used as evaluation indicators. A total of 203 patients were enrolled, with 207 mass lesions comprising 101 benign and 106 malignant lesions. The data set was divided into a training set ( n = 145 ), a validation set ( n = 22 ), and a test set ( n = 40 ) at a ratio of 7 : 1 : 2, and fivefold cross-validation was performed.
The mean AUCs based on the minimum bounding cube of the lesion and on the 3D-ROI of the lesion itself were 0.827 and 0.799, the accuracies were 78.54% and 74.63%, the sensitivities were 78.85% and 83.65%, the specificities were 78.22% and 65.35%, the NPVs were 78.85% and 71.31%, the PPVs were 78.22% and 79.52%, and the recall rates were 78.85% and 83.65%, respectively. There was no statistically significant difference in AUC between the lesion-itself model and the minimum-bounding-cube model ( Z = 0.771 , p = 0.4408 ). The minimum bounding cube based on the edge of the lesion showed higher accuracy and specificity and a lower recall rate in identifying benign and malignant lesions. Segmenting the 3D-ROI with a minimum bounding cube more effectively captures information from both the lesion itself and the surrounding tissue, and the corresponding DL model performs better than the model based on the lesion alone. Using a DL approach with a 3D attention mechanism based on ResNet50 to identify benign and malignant BI-RADS 4 lesions was feasible.
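The AUC used as an evaluation indicator above can be computed nonparametrically as a Mann-Whitney statistic over predicted malignancy probabilities. A minimal stdlib sketch; the labels and scores below are illustrative, not data from the study:

```python
def auc_mann_whitney(labels, scores):
    """AUC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one case of each class")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Illustrative malignancy probabilities (P2) for 4 benign (0) and 4 malignant (1) lesions
labels = [0, 0, 0, 0, 1, 1, 1, 1]
scores = [0.10, 0.35, 0.42, 0.60, 0.55, 0.70, 0.81, 0.90]
print(round(auc_mann_whitney(labels, scores), 3))  # → 0.938
```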

Author(s):  
Mohamed Zidan ◽  
Shimaa Ali Saad ◽  
Eman Abo Elhamd ◽  
Hosam Eldin Galal ◽  
Reem Elkady

Abstract Background Asymmetric breast density is a potentially perplexing finding; it may be due to normal hormonal variation of the parenchymal pattern and summation artifact, or it may indicate underlying true pathology. The current study aimed to identify the role of diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) values in the assessment of breast asymmetries. Results Fifty breast lesions were detected corresponding to the mammographic asymmetry: 35 (70%) benign lesions and 15 (30%) malignant lesions. The mean ADC value was 1.59 ± 0.4 × 10⁻³ mm²/s for benign lesions and 0.82 ± 0.3 × 10⁻³ mm²/s for malignant lesions. The ADC cutoff value to differentiate between benign and malignant lesions was 1.10 × 10⁻³ mm²/s, with sensitivity 80%, specificity 88.6%, positive predictive value 75%, negative predictive value 91%, and accuracy 86%. The best results were achieved by implementation of the combined DCE-MRI and DWI protocol, with sensitivity 93.3%, specificity 94.3%, positive predictive value 87.5%, negative predictive value 97.1%, and accuracy 94%. Conclusion Dynamic contrast-enhanced MRI (DCE-MRI) was the most sensitive method for detecting the underlying malignant pathology of breast asymmetries. However, its limited specificity may cause improper final BI-RADS classification and may increase unnecessary invasive procedures. DWI served as an adjunctive method to DCE-MRI that maintained high sensitivity while increasing the specificity and overall diagnostic accuracy of the breast MRI examination. The best results can be achieved by the combined protocol of DCE-MRI and DWI.
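The reported ADC cutoff acts as a simple decision rule: lower ADC (more restricted diffusion) suggests malignancy. A minimal sketch; how a value exactly at the cutoff is assigned is an assumption, not stated in the abstract:

```python
ADC_CUTOFF = 1.10  # ×10⁻³ mm²/s, the cutoff reported above

def classify_adc(adc):
    """Classify a lesion from its ADC value (in units of 10⁻³ mm²/s).
    Values strictly below the cutoff are called malignant; assigning
    the exact-cutoff case to 'benign' is an assumption of this sketch."""
    return "malignant" if adc < ADC_CUTOFF else "benign"

# Illustrative values near the reported group means
print(classify_adc(1.59))  # benign-group mean  → benign
print(classify_adc(0.82))  # malignant-group mean → malignant
```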


Heart ◽  
2018 ◽  
Vol 104 (23) ◽  
pp. 1921-1928 ◽  
Author(s):  
Ming-Zher Poh ◽  
Yukkee Cheung Poh ◽  
Pak-Hei Chan ◽  
Chun-Ka Wong ◽  
Louise Pun ◽  
...  

Objective To evaluate the diagnostic performance of a deep learning system for automated detection of atrial fibrillation (AF) in photoplethysmographic (PPG) pulse waveforms. Methods We trained a deep convolutional neural network (DCNN) to detect AF in 17 s PPG waveforms using a training data set of 149 048 PPG waveforms constructed from several publicly available PPG databases. The DCNN was validated using an independent test data set of 3039 smartphone-acquired PPG waveforms from adults at high risk of AF at a general outpatient clinic against ECG tracings reviewed by two cardiologists. Six established AF detectors based on handcrafted features were evaluated on the same test data set for performance comparison. Results In the validation data set (3039 PPG waveforms) consisting of three sequential PPG waveforms from 1013 participants (mean (SD) age, 68.4 (12.2) years; 46.8% men), the prevalence of AF was 2.8%. The area under the receiver operating characteristic curve (AUC) of the DCNN for AF detection was 0.997 (95% CI 0.996 to 0.999), significantly higher than all the other AF detectors (AUC range: 0.924–0.985). The sensitivity of the DCNN was 95.2% (95% CI 88.3% to 98.7%), specificity was 99.0% (95% CI 98.6% to 99.3%), positive predictive value (PPV) was 72.7% (95% CI 65.1% to 79.3%), and negative predictive value (NPV) was 99.9% (95% CI 99.7% to 100%) using a single 17 s PPG waveform. Using the three sequential PPG waveforms in combination (<1 min in total), the sensitivity was 100.0% (95% CI 87.7% to 100%), specificity was 99.6% (95% CI 99.0% to 99.9%), PPV was 87.5% (95% CI 72.5% to 94.9%), and NPV was 100% (95% CI 99.4% to 100%). Conclusions In this evaluation of PPG waveforms from adults screened for AF in a real-world primary care setting, the DCNN had high sensitivity, specificity, PPV, and NPV for detecting AF, outperforming other state-of-the-art methods based on handcrafted features.
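The abstract does not state how the three sequential per-waveform detections were combined into one screening decision; two plausible rules (any-positive and majority vote) can be sketched as:

```python
def combine_any(preds):
    """Flag AF if any of the per-waveform detections is positive.
    One plausible combination rule; the abstract does not specify it."""
    return any(preds)

def combine_majority(preds):
    """Flag AF if a strict majority of the detections are positive.
    Another plausible rule, also an assumption of this sketch."""
    return sum(preds) > len(preds) / 2

# Three sequential 17 s waveforms from one participant (illustrative)
preds = [True, False, True]
print(combine_any(preds), combine_majority(preds))
```

An any-positive rule trades specificity for sensitivity relative to majority vote, which is consistent with the direction of the reported single- versus combined-waveform figures but is not confirmed by the abstract.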


ISRN Oncology ◽  
2011 ◽  
Vol 2011 ◽  
pp. 1-7 ◽  
Author(s):  
Valeria Fiaschetti ◽  
Chiara Adriana Pistolese ◽  
Tommaso Perretta ◽  
Elsa Cossu ◽  
Chiara Arganini ◽  
...  

Purpose. To evaluate the correlation between MRI and histopathological findings in patients with mammographically detected BI-RADS (Breast Imaging Reporting and Data System) 3–5 microcalcifications and to allow better surgical planning. Materials and Methods. 62 female patients (age ) with screening-detected BI-RADS 3–5 microcalcifications underwent dynamic 3 T contrast-enhanced breast MRI. After a 30-day period (range 24–36 days), 55 patients underwent stereotactic vacuum-assisted biopsy (VAB), 5 patients underwent stereotactic mammographically guided biopsy, and 2 patients underwent MRI-guided VAB. Results. Microhistology examination demonstrated 36 malignant lesions and 26 benign lesions. The analysis of MRI findings identified 8 cases of MRI BI-RADS 5, 23 cases of MRI BI-RADS 4, 11 cases of MRI BI-RADS 3 (4 cases type A and 7 cases type B), and 20 cases of MRI BI-RADS 1-2. MRI sensitivity, specificity, positive predictive value, and negative predictive value were 88.8%, 76.9%, 84.2%, and 83.3%, respectively.


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8072
Author(s):  
Yu-Bang Chang ◽  
Chieh Tsai ◽  
Chang-Hong Lin ◽  
Poki Chen

As autonomous driving techniques become increasingly valued and widespread, real-time semantic segmentation has become a popular and challenging topic in deep learning and computer vision in recent years. However, to deploy a deep learning model on the edge devices that accompany sensors on vehicles, we need to design a structure with the best trade-off between accuracy and inference time. In previous works, several methods sacrificed accuracy for faster inference, while others aimed for the best accuracy achievable under real-time constraints. Nevertheless, the accuracies of previous real-time semantic segmentation methods still lag well behind those of general semantic segmentation methods. We therefore propose a network architecture based on a dual encoder and a self-attention mechanism. Compared with preceding works, we achieve 78.6% mIoU at 39.4 FPS at 1024 × 2048 resolution on a Cityscapes test submission.
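The mIoU score reported above is the mean of per-class intersection-over-union values, which can be derived from a class confusion matrix. A stdlib sketch with a toy two-class matrix (illustrative numbers, not Cityscapes data):

```python
def miou(conf):
    """Mean IoU from a confusion matrix (rows = ground truth, cols = prediction):
    IoU_c = TP_c / (TP_c + FP_c + FN_c), averaged over classes present."""
    n = len(conf)
    ious = []
    for c in range(n):
        tp = conf[c][c]
        fp = sum(conf[r][c] for r in range(n)) - tp
        fn = sum(conf[c]) - tp
        denom = tp + fp + fn
        if denom:  # skip classes absent from both prediction and ground truth
            ious.append(tp / denom)
    return sum(ious) / len(ious)

# Toy 2-class pixel counts: class 0 vs class 1
conf = [[50, 10],
        [5, 35]]
print(round(miou(conf), 3))  # → 0.735
```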


2021 ◽  
Author(s):  
Jiajia Cao ◽  
Qin Zhou ◽  
Yi Chen ◽  
Lin Yin ◽  
Fei Zhang

The segmentation of the retinal vascular tree is the fundamental step in diagnosing ophthalmological and cardiovascular diseases. Most existing vessel segmentation methods based on deep learning give the learned features equal importance. Ignoring the highly imbalanced ratio between background and vessels (the majority of pixels belong to the background), the learned features become dominantly guided by the background, with relatively little influence from the vessels, often leading to low model sensitivity and prediction accuracy. Reducing model size is a further challenge. We propose a mixed attention mechanism and asymmetric convolution encoder-decoder structure (MAAC) for retinal vessel segmentation to solve these problems. In MAAC, the mixed attention is designed to emphasize valid features and suppress invalid ones. It not only identifies information that helps recognize retinal vessels but also locates the position of the vessels. All square convolutions are replaced by asymmetric convolutions because they are more robust to rotational distortions, and small convolutions are better suited to extracting vessel features, given the thin shape of vessels. The use of asymmetric convolution reduces model parameters and improves the recognition of thin vessels. Experiments on the public datasets DRIVE, STARE, and CHASE_DB1 demonstrated that the proposed MAAC segments vessels more accurately, with global AUCs of 98.17%, 98.67%, and 98.53%, respectively. The mixed attention proposed in this study can be applied to other deep learning models for performance improvement without changing their network architectures.
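One reason asymmetric (k×1 and 1×k) convolutions shrink a model is parameter count: a kernel pair costs 2k weights instead of k². The stdlib sketch below also shows the special case where a 3×3 kernel is the outer product of a 3×1 and a 1×3 kernel, so two cheap passes reproduce the square convolution exactly; this separable case is an illustration of the cost saving, not the paper's exact design:

```python
def conv2d_valid(img, kern):
    """Plain 'valid' 2D cross-correlation on nested lists."""
    kh, kw = len(kern), len(kern[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + a][j + b] * kern[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(ow)] for i in range(oh)]

col = [[1.0], [2.0], [1.0]]   # 3x1 kernel (3 weights)
row = [[1.0, 0.0, -1.0]]      # 1x3 kernel (3 weights)
# The 3x3 outer-product kernel the pair is equivalent to (9 weights)
square = [[c[0] * r for r in row[0]] for c in col]

img = [[float(i * 5 + j) for j in range(5)] for i in range(5)]

two_pass = conv2d_valid(conv2d_valid(img, col), row)
one_pass = conv2d_valid(img, square)
print(two_pass == one_pass)  # identical outputs
print(2 * 3, 3 * 3)          # 6 weights for the pair vs 9 for the square kernel
```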


2020 ◽  
Vol 12 (3) ◽  
pp. 441
Author(s):  
Lifu Chen ◽  
Ting Weng ◽  
Jin Xing ◽  
Zhouhao Pan ◽  
Zhihui Yuan ◽  
...  

Bridge detection from Synthetic Aperture Radar (SAR) images has great strategic significance and practical value, but end-to-end bridge detection still faces many challenges. In this paper, a new deep learning-based network is proposed to identify bridges from SAR images, namely the multi-resolution attention and balance network (MABN). It comprises three parts: the attention and balanced feature pyramid (ABFP) network, the region proposal network (RPN), and the classification and regression module. First, the ABFP network extracts features from the SAR images, integrating the ResNeXt backbone, a balanced feature pyramid, and an attention mechanism. Second, the extracted features are used by the RPN to generate candidate boxes at different resolutions, which are then fused. The candidate boxes are then combined with the features extracted by the ABFP network through a region of interest (ROI) pooling strategy. Finally, the bridge detection results are produced by the classification and regression module. In addition, intersection over union (IOU) balanced sampling and balanced L1 loss functions are introduced for optimal training of the classification and regression network. In the experiment, TerraSAR data with 3-m resolution and Gaofen-3 data with 1-m resolution are used, and the results are compared with Faster R-CNN and SSD. The proposed network achieved the highest detection precision (P) and average precision (AP) among the three networks, at 0.877 and 0.896, respectively, with a recall rate (RR) of 0.917. Compared with the other two networks, the proposed network greatly reduces false alarms and missed targets, so its precision is greatly improved.
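The IOU-balanced sampling mentioned above builds on the standard intersection-over-union between a candidate box and a ground-truth box, which can be sketched as:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A candidate box partially overlapping a ground-truth box
print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

IOU-balanced sampling then draws negatives evenly across IoU bins rather than uniformly, so hard negatives (moderate IoU) are not swamped by easy ones.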


Author(s):  
Reham Khalil ◽  
Noha Mohamed Osman ◽  
Nivine Chalabi ◽  
Enas Abdel Ghany

Abstract Background We aimed to evaluate unenhanced MRI of the breast (UE-MRI) as an effective substitute for dynamic contrast-enhanced breast MRI (DCE-MRI) in both detecting and characterizing breast lesions. Our retrospective study enrolled 125 females (232 breasts, as 18 patients had a unilateral mastectomy) with breast masses of variable pathologies at MRI. The routine DCE-MRI protocol of the breast was conducted. Two blinded radiologists compared the conventional unenhanced images, including STIR, T2, and DWI, with the DCE-MRI to detect and characterize breast lesions, and their results were then compared with the final reference diagnoses supplied by histopathology or serial negative follow-ups. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic accuracy for UE-MRI and DCE-MRI were calculated. The UE-MRI results of each observer were also compared with DCE-MRI. Results The calculated UE-MRI sensitivity, specificity, positive predictive value, negative predictive value, and diagnostic accuracy for the first observer were 95%, 80%, 83%, 94%, and 89%, respectively; for the second observer, they were 94%, 79%, 81%, 93%, and 86%. Those for DCE-MRI were 98%, 82%, 84%, 98%, and 90% by the first observer and 97%, 81%, 84%, 97%, and 89% by the second observer. The intraobserver agreement between the UE-MRI and DCE-MRI results of each observer was 94% and 95%, while the interobserver agreement was 97.4% for UE-MRI and 98.3% for DCE-MRI. Conclusion UE-MRI of the breast can be a reliable and effective substitute for breast DCE-MRI. It can be used with comparable accuracy whenever contrast administration is not feasible or is contraindicated.


2021 ◽  
Vol 13 (3) ◽  
pp. 1224
Author(s):  
Xiangbin Liu ◽  
Liping Song ◽  
Shuai Liu ◽  
Yudong Zhang

As an emerging biomedical image processing technology, medical image segmentation has made great contributions to sustainable medical care and has become an important research direction in computer vision. With the rapid development of deep learning, medical image processing based on deep convolutional neural networks has become a research hotspot. This paper focuses on medical image segmentation based on deep learning. First, the basic ideas and characteristics of deep learning-based medical image segmentation are introduced. By reviewing its research status and summarizing the three main methods of medical image segmentation along with their limitations, future development directions are outlined. Based on a discussion of different pathological tissues and organs, their specific characteristics and classic segmentation algorithms are summarized. Despite the great achievements of medical image segmentation in recent years, deep learning-based medical image segmentation still faces difficulties: segmentation accuracy is not high, the number of medical images in data sets is small, and the resolution is low, so the inaccurate segmentation results cannot meet actual clinical requirements. Aiming at these problems, a comprehensive review of current deep learning-based medical image segmentation methods is provided to help researchers solve existing problems.


2018 ◽  
Vol 2018 ◽  
pp. 1-15
Author(s):  
Chuin-Mu Wang ◽  
Chieh-Ling Huang ◽  
Sheng-Chih Yang

Three-dimensional (3D) medical image segmentation is used to segment the target (a lesion or an organ) in 3D medical images. Through this process, 3D target information is obtained; hence, this technology is an important auxiliary tool for medical diagnosis. Although some methods have proved to be successful for two-dimensional (2D) image segmentation, their direct use in the 3D case has been unsatisfactory. To obtain more precise tumor segmentation results from 3D MR images, in this paper, we propose a method known as the 3D shape-weighted level set method (3D-SLSM). The proposed method first converts the LSM, which is superior with respect to 2D image segmentation, into a 3D algorithm that is suitable for overall calculations in 3D image models, and which improves the efficiency and accuracy of calculations. A 3D shape-weighted value is then added for each 3D-SLSM iterative process according to the changes in volume. Besides increasing the convergence rate and eliminating background noise, this shape-weighted value also brings the segmented contour closer to the actual tumor margins. To perform a quantitative analysis of 3D-SLSM and to examine its feasibility in clinical applications, we have divided our experiments into computer-simulated sequence images and actual breast MRI cases. Subsequently, we simultaneously compared various existing 3D segmentation methods. The experimental results demonstrated that 3D-SLSM exhibited precise segmentation results for both types of experimental images. In addition, 3D-SLSM showed better results for quantitative data compared with existing 3D segmentation methods.
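The abstract does not name the metric used for its quantitative comparison of 3D segmentation methods; the Dice coefficient is a common choice for scoring a 3D segmentation against ground truth and can be sketched as:

```python
def dice3d(a, b):
    """Dice coefficient between two binary 3D volumes (nested lists of 0/1):
    2|A ∩ B| / (|A| + |B|). A common overlap metric; the paper's exact
    evaluation metric is not stated in the abstract."""
    inter = size_a = size_b = 0
    for plane_a, plane_b in zip(a, b):
        for row_a, row_b in zip(plane_a, plane_b):
            for va, vb in zip(row_a, row_b):
                inter += va & vb
                size_a += va
                size_b += vb
    return 2 * inter / (size_a + size_b) if (size_a + size_b) else 1.0

# Toy 2x2x2 volumes: predicted segmentation vs ground truth
seg = [[[1, 1], [0, 0]], [[1, 0], [0, 0]]]
gt  = [[[1, 0], [0, 0]], [[1, 0], [1, 0]]]
print(dice3d(seg, gt))  # 2*2 / (3+3) ≈ 0.667
```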


Author(s):  
Ishtiaque Ahmed ◽  
Manan Darda ◽  
Neha Tikyani ◽  
Rachit Agrawal ◽  
...  

The COVID-19 pandemic has caused large-scale outbreaks in more than 150 countries worldwide, causing massive damage to the livelihoods of many people. The capacity to identify infected patients early and provide appropriate treatment is possibly the most important step in the battle against COVID-19. One of the quickest ways to diagnose patients is to use radiography and radiology images to detect the disease. Early studies have shown that chest X-rays of patients infected with COVID-19 have characteristic abnormalities. To identify COVID-19 patients from chest X-ray images, we used various deep learning models based on previous studies. We first compiled a data set of 2,815 chest radiographs from public sources. The model produces reliable and stable results with an accuracy of 91.6%, a positive predictive value of 80%, a negative predictive value of 100%, a specificity of 87.50%, and a sensitivity of 100%. We observe that a CNN-based architecture can diagnose COVID-19 disease. These outcomes can be further improved by increasing the dataset size and by further developing the CNN-based architecture used to train the model.
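The reported figures follow from the standard confusion-matrix definitions. A stdlib sketch with illustrative counts chosen to be consistent with the values above (TP = 40, FP = 10, TN = 70, FN = 0 are hypothetical, not the study's actual counts; the resulting accuracy of 91.7% matches the reported 91.6% up to rounding):

```python
def metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts consistent with the figures reported above
m = metrics(tp=40, fp=10, tn=70, fn=0)
for name, value in m.items():
    print(name, round(100 * value, 1))
```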

