CSGBBNet: An Explainable Deep Learning Framework for COVID-19 Detection

Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1712
Author(s):  
Xu-Jing Yao ◽  
Zi-Quan Zhu ◽  
Shui-Hua Wang ◽  
Yu-Dong Zhang

The COVID-19 virus has swept the world and affected many fields, drawing wide attention from all walks of life since the end of 2019. At present, although the global epidemic situation is leveling off and vaccine doses have been administered in large numbers, confirmed cases are still emerging around the world. To compensate for missed diagnoses caused by the uncertainty of the nucleic acid polymerase chain reaction (PCR) test, lung CT examination can be used as a complementary detection method to improve the diagnostic rate. Considering the time-consuming and labor-intensive nature of the traditional CT analysis process, we developed an efficient deep learning framework named CSGBBNet to solve the binary classification task on COVID-19 images, based on a COVID-Seg model for image preprocessing and a GBBNet for classification. Five runs with random seeds on the test set showed that our novel framework can rapidly analyze CT scan images and produce effective results for assisting COVID-19 detection, with a mean accuracy of 98.49 ± 1.23%, sensitivity of 99.00 ± 2.00%, specificity of 97.95 ± 2.51%, precision of 98.10 ± 2.61%, and F1 score of 98.51 ± 1.22%. Moreover, CSGBBNet performs better than seven previous state-of-the-art methods. The aim of this research is to link biomedical research with artificial intelligence and provide insights into the field of COVID-19 detection.
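The segment-then-classify design described above can be sketched as a two-stage pipeline. The `covid_seg` and `gbbnet` functions below are hypothetical toy stand-ins for the paper's COVID-Seg and GBBNet models (intensity thresholding in place of learned segmentation, a mean-intensity rule in place of a trained classifier):

```python
# Illustrative sketch of a segment-then-classify pipeline in the spirit of
# CSGBBNet. Both stages are made-up stand-ins, NOT the authors' models.

def covid_seg(image, threshold=0.35):
    """Stage 1 (toy): mask out background pixels below an intensity threshold."""
    return [[px if px >= threshold else 0.0 for px in row] for row in image]

def gbbnet(masked):
    """Stage 2 (toy): binary decision over the segmented region."""
    kept = [px for row in masked for px in row if px > 0.0]
    score = sum(kept) / len(kept) if kept else 0.0
    return "COVID-19" if score > 0.5 else "non-COVID-19"

def csgbbnet_pipeline(image):
    """Preprocess with the segmentation stage, then classify."""
    return gbbnet(covid_seg(image))

scan = [[0.1, 0.7, 0.8],
        [0.2, 0.9, 0.6],
        [0.0, 0.3, 0.4]]
print(csgbbnet_pipeline(scan))
```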

Author(s):  
R Dhaya

The World Health Organization (WHO) considers the COVID-19 coronavirus a global pandemic. The most effective form of protection is to wear a face mask in public places. Moreover, the COVID-19 pandemic prompted countries to set up lockdowns to prevent viral transmission. According to a survey study, the use of face masks at work decreases the chances of fast transmission. If face masks are not used, or are worn incorrectly, they contribute to the third and fourth waves of the coronavirus spreading throughout the world. This motivates us to conduct an efficient investigation of face mask identification systems and to monitor whether people use suitable face masks in public places. Deep learning is the most effective approach for detecting whether or not a person is wearing a face mask in a crowded area. Using a multiclass deep learning technique, this research study proposes an efficient two-stage identification (ETSI) approach for face mask detection, whereas binary classification alone offers no information about how a mask is worn or what kind of error is made. The proposed approach employs a CNN with the ReLU activation function to detect the face mask. Furthermore, in the current pandemic crisis, this research article offers a very efficient and precise approach for identifying COVID-19. Precision has increased as a result of employing a multi-class output in the final stage.
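The two-stage identification (ETSI) flow can be sketched as follows. Both stages are hypothetical stubs rather than the paper's CNN, but they illustrate why a multiclass second stage carries more information than a binary mask/no-mask decision:

```python
# Sketch of a two-stage face-mask identification flow. Stage 1 "detects"
# faces from pre-annotated data; stage 2 assigns one of three mask classes.
# Both stages are hypothetical stubs, not the paper's trained CNN.

MASK_CLASSES = ("mask_correct", "mask_incorrect", "no_mask")

def detect_faces(frame):
    """Stage 1 stub: return the face crops found in a frame."""
    return frame["faces"]

def classify_mask(face):
    """Stage 2 stub: multiclass decision per detected face."""
    if not face["has_mask"]:
        return "no_mask"
    return "mask_correct" if face["covers_nose"] else "mask_incorrect"

def etsi(frame):
    """Run both stages: detect faces, then classify each one."""
    return [classify_mask(f) for f in detect_faces(frame)]

frame = {"faces": [{"has_mask": True, "covers_nose": True},
                   {"has_mask": True, "covers_nose": False},
                   {"has_mask": False}]}
print(etsi(frame))
```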


2018 ◽  
Vol 19 (2) ◽  
pp. 393-408 ◽  
Author(s):  
Yumeng Tao ◽  
Kuolin Hsu ◽  
Alexander Ihler ◽  
Xiaogang Gao ◽  
Soroosh Sorooshian

Abstract Compared to ground precipitation measurements, satellite-based precipitation estimation products have the advantage of global coverage and high spatiotemporal resolutions. However, the accuracy of satellite-based precipitation products is still insufficient to serve many weather, climate, and hydrologic applications at high resolutions. In this paper, the authors develop a state-of-the-art deep learning framework for precipitation estimation using bispectral satellite information from infrared (IR) and water vapor (WV) channels. Specifically, a two-stage framework for precipitation estimation from bispectral information is designed, consisting of an initial rain/no-rain (R/NR) binary classification, followed by a second stage estimating the nonzero precipitation amount. In the first stage, the model aims to eliminate the large fraction of NR pixels and to delineate precipitation regions precisely. In the second stage, the model aims to estimate the pointwise precipitation amount accurately while preserving its heavily skewed distribution. Stacked denoising autoencoders (SDAEs), a commonly used deep learning method, are applied in both stages. Performance is evaluated on a number of common measures, including both R/NR and real-valued precipitation accuracy, and compared with an operational product, Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks–Cloud Classification System (PERSIANN-CCS). For R/NR binary classification, the proposed two-stage model outperforms PERSIANN-CCS by 32.56% in the critical success index (CSI). For real-valued precipitation estimation, the two-stage model is 23.40% lower in average bias, is 44.52% lower in average mean squared error, and has a 27.21% higher correlation coefficient. Hence, the two-stage deep learning framework has the potential to serve as a more accurate and more reliable satellite-based precipitation estimation product. The authors also provide some future directions for the development of satellite-based precipitation estimation products, both in incorporating auxiliary information and in improving retrieval algorithms.
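A minimal sketch of the two-stage scheme, with toy threshold and linear models standing in for the paper's stacked denoising autoencoders; the thresholds, coefficients, and log-space regression below are illustrative assumptions, not the paper's values:

```python
# Two-stage precipitation estimation sketch: stage 1 screens out no-rain
# pixels, stage 2 regresses an amount only for pixels flagged as rain.
# Working in log space is one common way to handle the heavy right skew
# of rain amounts (an illustrative choice, not necessarily the paper's).

import math

def stage1_rain_norain(ir, wv):
    """Toy R/NR classifier: colder cloud tops plus more moisture -> rain."""
    return ir < 220.0 and wv > 0.5          # made-up thresholds

def stage2_amount(ir, wv):
    """Toy regressor in log space, mapped back with expm1 (always >= 0)."""
    z = 0.05 * (220.0 - ir) + 1.0 * wv      # made-up linear model
    return math.expm1(max(z, 0.0))          # precipitation rate, mm/h

def estimate_precip(pixels):
    """Apply stage 1 per pixel; run stage 2 only where rain is flagged."""
    return [stage2_amount(ir, wv) if stage1_rain_norain(ir, wv) else 0.0
            for ir, wv in pixels]

# (IR brightness temperature in K, WV index) for three pixels.
rates = estimate_precip([(250.0, 0.2), (210.0, 0.8), (200.0, 0.9)])
print(rates)
```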


2020 ◽  
Vol 9 (5) ◽  
pp. 23-34
Author(s):  
Purva Singh

On 11th March 2020, the World Health Organization (WHO) declared Corona Virus Disease of 2019 (COVID-19) a pandemic. Over time, the exponential growth of this disease has highlighted a mixture of sentiments expressed by the general population from various parts of the world speaking varied languages. It is, therefore, essential to analyze public sentiment during this wave of the pandemic. While much work exists on determining the sentiment polarity of tweets related to COVID-19 expressed in the English language, work is still needed on public sentiments expressed in languages other than English. This paper proposes Covhindia, a deep-learning framework that performs sentiment polarity detection of tweets related to COVID-19 posted in the Hindi language on the Twitter platform. The proposed framework applies machine translation to Hindi tweets and passes the translated data as input to a deep learning model trained on an English corpus of COVID-19 tweets posted from India [18]. The paper compares the performance of nine deep learning models in classifying sentiment polarity on an English dataset. The comparison reveals that the BERT model has the best polarity detection accuracy on the English corpus. To test Covhindia's accuracy in performing sentiment classification on Hindi tweets, the paper employs a separate dataset built using a Python library called Tweepy to extract Hindi tweets related to COVID-19. Experimental results reveal that Covhindia achieved state-of-the-art accuracy in classifying COVID-19 tweets posted in the Hindi language. The use of open-source machine translation tools paved the way for leveraging Covhindia for multilingual sentiment classification on COVID-19 tweets. For the benefit of the research community, the code and Jupyter notebooks related to this paper are available on GitHub.
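Covhindia's translate-then-classify flow can be sketched as below. The lookup-table "translator" and the lexicon classifier are hypothetical stand-ins for the open-source machine translation tool and the trained BERT model; none of the names or data come from the paper:

```python
# Translate-then-classify sketch: stage 1 maps a Hindi tweet to English,
# stage 2 classifies the English text. Both stages are toy stand-ins.

# Stand-in "machine translation" output for two Hindi tweet IDs.
FAKE_TRANSLATIONS = {
    "tweet_hi_1": "the vaccine is effective and safe",
    "tweet_hi_2": "the situation is very bad",
}

POSITIVE = {"effective", "safe", "good"}
NEGATIVE = {"bad", "worse", "scared"}

def translate_hi_en(tweet_id):
    """Stage 1: Hindi -> English (stub lookup)."""
    return FAKE_TRANSLATIONS[tweet_id]

def classify_sentiment(text):
    """Stage 2: toy lexicon classifier in place of the trained BERT model."""
    words = set(text.split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def covhindia(tweet_id):
    """Full pipeline: translate, then classify the translation."""
    return classify_sentiment(translate_hi_en(tweet_id))

print(covhindia("tweet_hi_1"))   # positive
print(covhindia("tweet_hi_2"))   # negative
```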


2020 ◽  
Vol 34 (6) ◽  
pp. 673-682
Author(s):  
Ashish Tripathi ◽  
Arush Jain ◽  
Krishna K. Mishra ◽  
Anand Bhushan Pandey ◽  
Prem Chand Vashist

Due to the rapidly spreading nature of coronavirus, a pandemic situation has emerged around the world. It is affecting society at large, including the global economy and public health. Recent studies have found that the novel and unknown nature of this virus makes it more difficult to identify and treat affected patients at an early stage. In this context, a time-consuming method named reverse transcription-polymerase chain reaction (RT-PCR) is being used to detect positive cases of COVID-19, which requires samples from suspected patients to diagnose the disease. This paper presents a new deep learning-based method to detect COVID-19 cases using chest X-ray images, as recent studies show that radiology images have relevant features that can be used to predict COVID-19. The proposed method is developed for binary classification to identify whether or not a person is infected with COVID-19. A total of 2400 X-ray images are used for the experimental work: 1000 COVID-19 and 1000 non-COVID-19 training images, and 200 COVID-19 and 200 non-COVID-19 testing images. The proposed method has been compared with existing state-of-the-art methods on various statistical parameters and gives better results with higher accuracy in diagnosing COVID-19 cases, obtaining 98.25% accuracy, 98.49% precision, 98% sensitivity, 98.50% specificity, and a 98.25% F1 score.
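The five reported test metrics are mutually consistent: on the 400-image test set, the confusion matrix TP = 196, FN = 4, TN = 197, FP = 3 reproduces every figure. Note that this matrix is inferred here from the reported sensitivity and specificity; the paper does not state it directly:

```python
# Sanity check: the five reported test metrics all follow from one
# confusion matrix. TP=196, FN=4, TN=197, FP=3 is inferred from the
# stated sensitivity/specificity on 200+200 test images.

tp, fn, tn, fp = 196, 4, 197, 3

accuracy    = (tp + tn) / (tp + tn + fp + fn)
precision   = tp / (tp + fp)
sensitivity = tp / (tp + fn)      # a.k.a. recall
specificity = tn / (tn + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(f"accuracy={accuracy:.2%} precision={precision:.2%} "
      f"sensitivity={sensitivity:.2%} specificity={specificity:.2%} f1={f1:.2%}")
# accuracy=98.25% precision=98.49% sensitivity=98.00% specificity=98.50% f1=98.25%
```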


2019 ◽  
Author(s):  
Xianggen Liu ◽  
Pengyong Li ◽  
Sen Song

Abstract Chemical retrosynthesis has been a crucial and challenging task in organic chemistry for several decades. In the early years, retrosynthesis was accomplished by the disconnection approach, which is labor-intensive and requires expert knowledge. Afterward, rule-based methods dominated retrosynthesis for years. In this study, we revisit the disconnection approach by leveraging deep learning (DL) to boost its performance and increase the explainability of DL. Concretely, we propose a novel graph-based deep-learning framework, named DeRetro, to predict the set of reactants for a target product by executing the processes of disconnection and reactant generation in order. Experimental results show that DeRetro achieves new state-of-the-art performance in predicting reactants. In-depth analyses also demonstrate that even without the reaction type as input, DeRetro retains its retrosynthesis performance while other methods show a significant decrease, resulting in a large margin of 19% between DeRetro and the previous state-of-the-art rule-based method. These results establish DeRetro as a powerful and useful computational tool for solving the challenging problem of retrosynthetic analysis.


2021 ◽  
Author(s):  
Rohan Bhansali ◽  
Rahul Kumar

Abstract Burns are the fourth most prevalent unintentional injury around the world and, when left untreated, can cause permanent damage and are sometimes fatal. An important aspect of treating burn injuries is accurate and efficient diagnosis. Classifying the three primary types of burns (superficial dermal, deep dermal, and full thickness) is essential in determining the necessity of surgery, which is often critical to the afflicted patient's survival. Unfortunately, reconstructive burn surgeons and dermatologists are only able to diagnose these types of burns with approximately 50-75% accuracy. As a result, we propose the use of an eight-layer convolutional neural network, BurnNet, for rapid and precise burn classification with 99.87% accuracy. We applied affine transformations to artificially augment our dataset and found that our model attained near-perfect metrics across the board, demonstrating the strong potential of deep learning architectures for burn classification.
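Affine augmentation of the kind mentioned above can be illustrated with the two simplest affine maps, a horizontal flip and a 90-degree rotation, applied to a tiny 2D "image"; a real pipeline would use an image library and also shear, translate, and scale:

```python
# Minimal affine augmentation sketch on a 2D list "image". These are the
# two simplest affine maps; they stand in for a full augmentation pipeline.

def hflip(img):
    """Mirror each row (affine map with matrix [[-1, 0], [0, 1]])."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate 90 degrees counter-clockwise."""
    return [list(col) for col in zip(*img)][::-1]

def augment(img):
    """Return the original plus two affine-transformed copies."""
    return [img, hflip(img), rot90(img)]

img = [[1, 2],
       [3, 4]]
for variant in augment(img):
    print(variant)
```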


2021 ◽  
Author(s):  
Mohanad Alkhodari ◽  
Ahsan H. Khandoker

Abstract This study sought to investigate the feasibility of using smartphone-based breathing sounds within a deep learning framework to discriminate between COVID-19 subjects, including asymptomatic ones, and healthy subjects. A total of 480 breathing sounds (240 shallow and 240 deep) were obtained from a publicly available database named Coswara. These sounds were recorded by 120 COVID-19 and 120 healthy subjects via a smartphone microphone through a website application. A deep learning framework was proposed herein that relies on hand-crafted features extracted from the original recordings and from the mel-frequency cepstral coefficients (MFCC), as well as deep-activated features learned by a combination of convolutional neural network and bi-directional long short-term memory units (CNN-BiLSTM). Analysis of the normal distribution of the combined MFCC values showed that COVID-19 subjects tended to have a distribution skewed more towards the right side of the zero mean (shallow: 0.59±1.74, deep: 0.65±4.35). In addition, the proposed deep learning approach had an overall discrimination accuracy of 94.58% and 92.08% using shallow and deep recordings, respectively. Furthermore, it detected COVID-19 subjects successfully with a maximum sensitivity of 94.21%, specificity of 94.96%, and area under the receiver operating characteristic (AUROC) curve of 0.90. Among the 120 COVID-19 participants, asymptomatic subjects (18 subjects) were successfully detected with 100.00% accuracy using shallow recordings and 88.89% using deep recordings. This study paves the way towards utilizing smartphone-based breathing sounds for COVID-19 detection. The observations found in this study are promising and suggest deep learning with smartphone-based breathing sounds as an effective pre-screening tool for COVID-19 alongside the current reverse-transcription polymerase chain reaction (RT-PCR) assay. It can be considered an early, rapid, easily distributed, time-efficient, and almost no-cost diagnosis technique complying with social distancing restrictions during the COVID-19 pandemic.
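The right-skew observation above can be checked with the standard moment-based sample skewness (positive values indicate a right skew). The data below are made-up stand-ins, not Coswara MFCC values:

```python
# Adjusted Fisher-Pearson sample skewness, the usual moment-based measure
# of asymmetry. A positive value means the distribution leans right of its
# mean, as reported for the COVID-19 subjects' combined MFCC values.

import math

def skewness(xs):
    """Adjusted Fisher-Pearson sample skewness (positive => right skew)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n    # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n    # third central moment
    g1 = m3 / m2 ** 1.5
    return math.sqrt(n * (n - 1)) / (n - 2) * g1

# Made-up sample with a long right tail.
right_skewed = [-1.0, -0.5, 0.0, 0.2, 0.4, 0.6, 1.5, 3.0, 4.5]
print(round(skewness(right_skewed), 3))
```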


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262448
Author(s):  
Mohanad Alkhodari ◽  
Ahsan H. Khandoker

This study sought to investigate the feasibility of using smartphone-based breathing sounds within a deep learning framework to discriminate between COVID-19 subjects, including asymptomatic ones, and healthy subjects. A total of 480 breathing sounds (240 shallow and 240 deep) were obtained from a publicly available database named Coswara. These sounds were recorded by 120 COVID-19 and 120 healthy subjects via a smartphone microphone through a website application. A deep learning framework was proposed herein that relies on hand-crafted features extracted from the original recordings and from the mel-frequency cepstral coefficients (MFCC), as well as deep-activated features learned by a combination of convolutional neural network and bi-directional long short-term memory units (CNN-BiLSTM). The statistical analysis of patient profiles showed a significant difference (p-value: 0.041) for ischemic heart disease between COVID-19 and healthy subjects. Analysis of the normal distribution of the combined MFCC values showed that COVID-19 subjects tended to have a distribution skewed more towards the right side of the zero mean (shallow: 0.59±1.74, deep: 0.65±4.35, p-value: <0.001). In addition, the proposed deep learning approach had an overall discrimination accuracy of 94.58% and 92.08% using shallow and deep recordings, respectively. Furthermore, it detected COVID-19 subjects successfully with a maximum sensitivity of 94.21%, specificity of 94.96%, and area under the receiver operating characteristic (AUROC) curve of 0.90. Among the 120 COVID-19 participants, asymptomatic subjects (18 subjects) were successfully detected with 100.00% accuracy using shallow recordings and 88.89% using deep recordings. This study paves the way towards utilizing smartphone-based breathing sounds for COVID-19 detection. The observations found in this study are promising and suggest deep learning with smartphone-based breathing sounds as an effective pre-screening tool for COVID-19 alongside the current reverse-transcription polymerase chain reaction (RT-PCR) assay. It can be considered an early, rapid, easily distributed, time-efficient, and almost no-cost diagnosis technique complying with social distancing restrictions during the COVID-19 pandemic.


2019 ◽  
Vol 37 (15_suppl) ◽  
pp. 3140-3140
Author(s):  
Eldad Klaiman ◽  
Jacob Gildenblat ◽  
Ido Ben-Shaul ◽  
Astrid Heller ◽  
Konstanty Korski ◽  
...  

3140 Background: Recently, histological pattern signatures obtained from diagnostic H&E images have been found to predict mutation, biomarker status, or outcome. We report here on a novel deep learning based framework designed to identify and extract predictive histological signatures. We applied this framework in three experiments, predicting the microsatellite status (MSS) of colorectal cancer (CRC), breast cancer (BC) micrometastasis in lymph nodes (LN), and pathologic complete response (pCR) in BC diagnostic biopsies. Methods: Our deep learning based algorithm was trained on histology images at 20X magnification. Algorithms were trained for binary classification for each of the three cohorts. We used 75% of the images for training and tested our algorithm on the remaining 25%. Cohort details are as follows: MSS for CRC: 94 patients' H&E stained tissue images from the Roche internal CRC80 dataset (MSS n = 24; MSI n = 70) were used. BC LN: 270 patients' H&E stained tissue images from the CAMELYON16 dataset (LN(+) n = 110; LN(-) n = 160) were used. pCR for BC: 225 patients' H&E stained tissue images from the Tryphaena Study BO22280, a neoadjuvant Trastuzumab/Pertuzumab chemotherapy combination trial (pCR n = 111; non-pCR n = 114). Results: We report and assess algorithm performance on each of the cohorts by area under the curve (AUC). Prediction of MSS status on CRC80 yielded an AUC of 0.9. Prediction of LN invasion on the CAMELYON16 dataset yielded an AUC of 0.85. Prediction of pCR on the Tryphaena cohort yielded an AUC of 0.8. Conclusions: We present a new approach to generate predictive signatures based on conventional diagnostic H&E images and a novel machine learning framework. The CRC80 and CAMELYON16 cohorts served as confidence-building experiments, with predictive features well known to clinicians and visually confirmed. The predictive algorithm for pCR in the Tryphaena cohort yielded both response prediction and high-predictive-value FOVs. These included tissue patterns which have not until now been considered to influence the prediction of pCR.
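The AUC figures above can be computed without tracing a ROC curve, since AUC equals the probability that a randomly chosen positive case scores above a randomly chosen negative one (the normalized Mann-Whitney U statistic). The scores below are made up, not the study's outputs:

```python
# AUC via the rank interpretation: the fraction of (positive, negative)
# score pairs ranked correctly, counting ties as half. Equivalent to the
# area under the ROC curve; quadratic in cohort size, fine for a sketch.

def auc(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs ranked correctly; ties 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.7, 0.6]   # made-up model scores for positive cases
neg = [0.5, 0.4, 0.8, 0.2]   # made-up model scores for negative cases
print(auc(pos, neg))          # 0.84375
```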

