TickPhone App: A Smartphone Application for Rapid Tick Identification Using Deep Learning

2021 ◽  
Vol 11 (16) ◽  
pp. 7355
Author(s):  
Zhiheng Xu ◽  
Xiong Ding ◽  
Kun Yin ◽  
Ziyue Li ◽  
Joan A. Smyth ◽  
...  

Tick species are considered the second leading vector of human diseases. Different ticks can transmit a variety of pathogens that cause various tick-borne diseases (TBDs), such as Lyme disease. Lyme disease remains challenging to diagnose because of its non-specific symptoms. Rapid and accurate identification of tick species plays an important role in predicting potential disease risk for tick-bitten patients and ensuring timely and effective treatment. Here, we developed, optimized, and tested a smartphone-based deep learning algorithm (termed the “TickPhone app”) for tick identification. The deep learning model was trained on more than 2000 tick images and optimized over different parameters, including image sizes, deep learning architectures, image styles, and training–testing dataset distributions. The optimized deep learning model achieved a training accuracy of ~90% and a validation accuracy of ~85%. The TickPhone app was used to identify 31 independent tick species and achieved an accuracy of 95.69%. This simple, easy-to-use app shows great potential to estimate the epidemiology and risk of tick-borne disease, help health care providers better predict potential disease risk for tick-bitten patients, and ultimately enable timely and effective medical treatment.
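The accuracy figures quoted above are standard multi-class classification metrics. As a generic sketch (not the TickPhone code; the species labels are placeholders), overall and per-species accuracy can be computed as:

```python
from collections import Counter

def overall_accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def per_species_accuracy(y_true, y_pred):
    """Accuracy broken down by true class (here, tick species)."""
    totals, hits = Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        hits[t] += t == p
    return {species: hits[species] / totals[species] for species in totals}
```

Reporting both views matters when species are imbalanced: a model can score well overall while failing on rare species.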

2021 ◽  
Author(s):  
Jae-Seung Yun ◽  
Jaesik Kim ◽  
Sang-Hyuk Jung ◽  
Seon-Ah Cha ◽  
Seung-Hyun Ko ◽  
...  

Objective: We aimed to develop and evaluate a non-invasive deep learning algorithm for screening type 2 diabetes in UK Biobank participants using retinal images. Research Design and Methods: The deep learning model for prediction of type 2 diabetes was trained on retinal images from 50,077 UK Biobank participants and tested on 12,185 participants. We evaluated its performance in terms of predicting traditional risk factors (TRFs) and genetic risk for diabetes. Next, we compared the performance of three models in predicting type 2 diabetes using 1) an image-only deep learning algorithm, 2) TRFs, 3) the combination of the algorithm and TRFs. Assessing net reclassification improvement (NRI) allowed quantification of the improvement afforded by adding the algorithm to the TRF model. Results: When predicting TRFs with the deep learning algorithm, the areas under the curve (AUCs) obtained with the validation set for age, sex, and HbA1c status were 0.931 (0.928-0.934), 0.933 (0.929-0.936), and 0.734 (0.715-0.752), respectively. When predicting type 2 diabetes, the AUC of the composite logistic model using non-invasive TRFs was 0.810 (0.790-0.830), and that for the deep learning model using only fundus images was 0.731 (0.707-0.756). Upon addition of TRFs to the deep learning algorithm, discriminative performance was improved to 0.844 (0.826-0.861). The addition of the algorithm to the TRFs model improved risk stratification with an overall NRI of 50.8%. Conclusions: Our results demonstrate that this deep learning algorithm can be a useful tool for stratifying individuals at high risk of type 2 diabetes in the general population.
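The net reclassification improvement used above quantifies how often adding the algorithm moves predicted risks in the right direction. A minimal sketch of the category-free (continuous) NRI (illustrative only, not the authors' implementation) is:

```python
def net_reclassification_improvement(risk_old, risk_new, outcome):
    """Category-free NRI comparing a base risk model to an augmented one.

    risk_old / risk_new: predicted risks from the two models.
    outcome: 1 if the participant developed the disease, else 0.
    """
    up_e = down_e = up_ne = down_ne = 0
    n_events = sum(outcome)
    n_nonevents = len(outcome) - n_events
    for old, new, y in zip(risk_old, risk_new, outcome):
        if y == 1:
            up_e += new > old      # events should move up
            down_e += new < old
        else:
            up_ne += new > old     # nonevents should move down
            down_ne += new < old
    nri_events = (up_e - down_e) / n_events
    nri_nonevents = (down_ne - up_ne) / n_nonevents
    return nri_events + nri_nonevents
```

A positive value means the new model tends to raise risk for cases and lower it for controls.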


Author(s):  
Amit Doegar ◽  
Maitreyee Dutta ◽  
Gaurav Kumar ◽  
...  

In the present scenario, trust in images is under threat in digital and online applications as well as on social media. An individual’s reputation can be tarnished by misinformation or manipulation embedded in digital images. Image forgery detection is an approach for detecting and localizing the forged components of a manipulated image. Effective image forgery detection requires an adequate number of features, which can be obtained with a deep learning model that does not require manual or handcrafted feature engineering. In this paper, we implement the GoogleNet deep learning model to extract image features and employ the Random Forest machine learning algorithm to detect whether an image is forged. The proposed approach is evaluated on the publicly available benchmark dataset MICC-F220, using k-fold cross-validation to split the dataset into training and testing sets, and is compared with state-of-the-art approaches.
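The k-fold cross-validation split mentioned above can be sketched in a few lines. This is a generic illustration of the splitting step only; the GoogleNet feature extraction and Random Forest classifier are omitted:

```python
import random

def k_fold_splits(n_samples, k=10, seed=0):
    """Shuffle sample indices and yield (train, test) index lists for k folds.

    Every sample appears in exactly one test fold, so each image is
    held out once across the k rounds of training and evaluation.
    """
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for held_out in range(k):
        test = folds[held_out]
        train = [j for f, fold in enumerate(folds) if f != held_out for j in fold]
        yield train, test
```

Averaging the per-fold accuracy then gives a less optimistic estimate than a single train/test split on a 220-image dataset like MICC-F220.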


2018 ◽  
Vol 36 (4_suppl) ◽  
pp. 266-266
Author(s):  
Sunyoung S. Lee ◽  
Jin Cheon Kim ◽  
Jillian Dolan ◽  
Andrew Baird

266 Background: The characteristic histological feature of pancreatic adenocarcinoma (PAD) is extensive desmoplasia alongside leukocytes and cancer-associated fibroblasts. Desmoplasia is a known barrier to the absorption and penetration of therapeutic drugs. Stromal cells are key elements for a clinical response to chemotherapy and immunotherapy, but few models exist to analyze the spatial and architectural elements that compose the complex tumor microenvironment in PAD. Methods: We created a deep learning algorithm to analyze images and quantify cells and fibrotic tissue. Histopathology slides of PAD patients (pts) were then used to automate the recognition and mapping of adenocarcinoma cells, leukocytes, fibroblasts, and degree of desmoplasia, defined as the ratio of the area of fibrosis to that of the tumor gland. This information was correlated with mutational burden, defined as mutations (mts) per megabase (mb) of each pt. Results: The histopathology slides (H&E stain) of 126 pts were obtained from The Cancer Genome Atlas (TCGA) and analyzed with the deep learning model. Pt with the largest mutational burden (733 mts/mb, n = 1 pt) showed the largest number of leukocytes (585/mm2). Those with the smallest mutational burden (0 mts/mb, n = 16 pts) showed the fewest leukocytes (median, 14/mm2). Mutational burden was linearly proportional to the number of leukocytes (R2 of 0.7772). The pt with a mutational burden of 733 was excluded as an outlier. No statistically significant difference in the number of fibroblasts, degree of desmoplasia, or thickness of the first fibrotic layer (the smooth muscle actin-rich layer outside of the tumor gland), was found among pts of varying mutational burden. The median distance from a tumor gland to a leukocyte was inversely proportional to the number of leukocytes in a box of 1 mm2 with a tumor gland at the center. 
Conclusions: A deep learning model enabled automated quantification and mapping of desmoplasia, stromal and malignant cells, revealing the spatial and architectural relationship of these cells in PAD pts with varying mutational burdens. Further biomarker driven studies in the context of immunotherapy and anti-fibrosis are warranted.
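The linear relationship reported between mutational burden and leukocyte count is summarized by an R² value. A minimal least-squares sketch of that statistic (illustrative only, not the study's code) is:

```python
def linear_r2(x, y):
    """Coefficient of determination for a least-squares line y ≈ a*x + b."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    a = sxy / sxx                      # fitted slope
    b = mean_y - a * mean_x            # fitted intercept
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1 - ss_res / ss_tot
```

An R² near 0.78, as reported, means the fitted line explains most but not all of the variation in leukocyte counts.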


2019 ◽  
Vol 62 (1) ◽  
pp. 20-27 ◽  
Author(s):  
Kaitlyn Irving ◽  
Lindsay Galway

Climate change has allowed for the expansion and intensification of blacklegged ticks, the vector of Lyme disease. Projections estimate that by the year 2049 all health units in Ontario will have environmental conditions suitable for the establishment of this vector. A review of website content from health units in Ontario was performed to assess the quality of tick and Lyme disease information provided to the public and health care providers. Websites were evaluated on criteria such as the provision of Lyme disease information (i.e., transmission, symptoms, treatment, etc.), the inclusion of misleading or incorrect information, and the visuals provided. The quality of textual and visual information varied substantially across the 35 health units analyzed. Eleven health units were found to provide misleading or incorrect information. Disparities were found between areas with current Lyme disease risk and those without. The majority of health units did not include satisfactory visual content pertaining to ticks. Given the expected expansion and intensification of blacklegged tick populations across the province, all health units must ensure that the information communicated to the public about ticks and Lyme disease is high quality and consistent. We conclude with specific recommendations to improve the textual and visual content of these websites.


2020 ◽  
Vol 4 (Supplement_1) ◽  
Author(s):  
Indriani Astono ◽  
Christopher W Rowe ◽  
James Welsh ◽  
Phillip Jobling

Introduction: Nerves in the cancer microenvironment have prognostic significance, and nerve-cancer crosstalk may contribute to tumour progression, but the role of nerves in thyroid cancer is not known (1). Reproducible techniques to quantify innervation are lacking, with reliance on manual counting or basic single-parameter digital quantification. Aims: To determine whether a deep machine learning algorithm could objectively quantify nerves in a digital histological dataset of thyroid cancers immunostained for the specific pan-neuronal marker PGP9.5. Methods: A training dataset of 30 digitised papillary thyroid cancer immunohistochemistry slides was manually screened for PGP9.5-positive nerves and annotated using QuPath (2); 1500 true positive nerves were identified. This dataset was used to train the deep learning algorithm. First, a colour filter identified pixels positive for PGP9.5 (Model 1). Then, a manually tuned colour filter and clustering method identified Regions of Interest (ROIs): clusters of PGP9.5-positive pixels that may represent nerves (Model 2). These ROIs were classified by the deep learning model (Model 3), based on a Convolutional Neural Network with approximately 2.7 million trainable parameters. The full model was run on a testing dataset of thyroid cancer slides (n=5), containing 7-35 manually identified nerves per slide. Model predictions were validated by human assessment of a random subset of 100 ROIs. The code was written in Python and the model was developed in Keras. Results: Model 2 (colour filter + clustering only) identified a median of 2247 ROIs per slide (range 349-4748), which included 94% of the manually identified nerves. However, most Model 2 ROIs were false positives (FP) (median 85% FP, range 68-95%), indicating that Model 2 was sensitive but poorly specific for nerve identification.
Model 3 (deep learning) identified fewer ROIs per slide (median 1068, range 150-3091), but still correctly identified 94% of manually annotated nerves. Of the additionally detected ROIs in Model 3, median FP rate was 35%. However, in slides where higher non-specific immunostaining was present, then the number of FP ROIs was >90%. Conclusion: Simple image analysis based on colour filtration/cluster analysis does not accurately identify immunohistochemically labelled nerves in thyroid cancers. Addition of deep-learning improves sensitivity with acceptable specificity, and significantly increases the number of true positive nerves detected compared to manual counting. However, the current deep learning model lacks specificity in the setting of non-specific immunostaining, which is a basis for improving further iterations of this model to facilitate study of the significance of innervation of thyroid and other cancers. References: (1) Faulkner et al. Cancer Discovery (2019) doi: 10.1158/2159-8290.CD-18-1398. (2) Bankhead P et al. Sci Rep 2017;7(1):16878.
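Model 2's clustering step, which groups colour-filtered PGP9.5-positive pixels into candidate ROIs, amounts to connected-component labelling. A minimal flood-fill sketch of that step (a generic illustration, not the authors' Keras pipeline) is:

```python
from collections import deque

def find_rois(mask):
    """Cluster adjacent positive pixels (4-connectivity) into candidate ROIs.

    mask: 2-D list of 0/1 values, e.g. the output of a colour filter
    for PGP9.5-positive pixels. Returns a list of ROIs, each a list of
    (row, col) pixel coordinates.
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    rois = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                roi, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    roi.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                rois.append(roi)
    return rois
```

Each ROI found this way would then be cropped and passed to the CNN classifier (Model 3) to decide whether it is a nerve.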


2021 ◽  
Author(s):  
Joshua Levy ◽  
Christopher M Navas ◽  
Joan A Chandra ◽  
Brock Christensen ◽  
Louis J Vaickus ◽  
...  

BACKGROUND AND AIMS: Evaluation for dyssynergia is the most common reason that gastroenterologists refer patients for anorectal manometry, because dyssynergia is amenable to biofeedback by physical therapists. High-definition anorectal manometry (3D-HDAM) is a promising technology to evaluate anorectal physiology, but adoption remains limited by its sheer complexity. We developed a 3D-HDAM deep learning algorithm to evaluate for dyssynergia. METHODS: Spatial-temporal data were extracted from consecutive 3D-HDAM studies performed between 2018-2020 at a tertiary institution. The technical procedure and gold standard definition of dyssynergia were based on the London consensus, adapted to the needs of 3D-HDAM technology. Three machine learning models were generated: (1) traditional machine learning informed by conventional anorectal function metrics, (2) deep learning, and (3) a hybrid approach. Diagnostic accuracy was evaluated using bootstrap sampling to calculate area-under-the-curve (AUC). To evaluate overfitting, models were validated by adding 502 simulated defecation maneuvers with diagnostic ambiguity. RESULTS: 302 3D-HDAM studies representing 1,208 simulated defecation maneuvers were included (average age 55.2 years; 80.5% women). The deep learning model had comparable diagnostic accuracy (AUC=0.91 [95% confidence interval 0.89-0.93]) to traditional (AUC=0.93[0.92-0.95]) and hybrid (AUC=0.96[0.94-0.97]) predictive models in training cohorts. However, the deep learning model handled ambiguous tests more cautiously than other models; the deep learning model was more likely to designate an ambiguous test as inconclusive (odds ratio=4.21[2.78-6.38]) versus traditional/hybrid approaches. CONCLUSIONS: By considering complex spatial-temporal information beyond conventional anorectal function metrics, deep learning on 3D-HDAM technology may enable gastroenterologists to reliably identify and manage dyssynergia in broader practice.
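The bootstrap AUC evaluation described in the Methods can be sketched generically. The rank-sum (Mann-Whitney) form of the AUC and a percentile bootstrap interval (illustrative only, not the authors' implementation) look like:

```python
import random

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum formulation:
    the probability a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        lb = [labels[i] for i in idx]
        if 0 < sum(lb) < n:  # resample must contain both classes
            stats.append(auc(lb, [scores[i] for i in idx]))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi
```

Resampling whole studies (rather than maneuvers) would better respect the fact that maneuvers from one patient are correlated.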


10.2196/15963 ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. e15963 ◽  
Author(s):  
Yi-Ying Wu ◽  
Tzu-Chuan Huang ◽  
Ren-Hua Ye ◽  
Wen-Hui Fang ◽  
Shiue-Wei Lai ◽  
...  

Background Bone marrow aspiration and biopsy remain the gold standard for the diagnosis of hematological diseases despite the development of flow cytometry (FCM) and molecular and gene analyses. However, the interpretation of the results is laborious and operator dependent. Furthermore, the obtained results exhibit inter- and intra-observer variation among specialists. Therefore, it is important to develop a more objective and automated analysis system. Several deep learning models have been developed and applied in medical image analysis but not in the field of hematological histology, especially for bone marrow smear applications. Objective The aim of this study was to develop a deep learning model (BMSNet) for assisting hematologists in the interpretation of bone marrow smears for faster diagnosis and disease monitoring. Methods From January 1, 2016, to December 31, 2018, 122 bone marrow smears were photographed and divided into a development cohort (N=42), a validation cohort (N=70), and a competition cohort (N=10). The development cohort included 17,319 annotated cells from 291 high-resolution photos. In total, 20 photos were taken for each patient in the validation cohort and the competition cohort. This study included eight annotation categories: erythroid, blasts, myeloid, lymphoid, plasma cells, monocyte, megakaryocyte, and unable to identify. BMSNet is a convolutional neural network with the YOLO v3 architecture, which detects and classifies single cells in a single model. Six visiting staff members participated in a human-machine competition, and the results from the FCM were regarded as the ground truth. Results In the development cohort, according to 6-fold cross-validation, the average precision of the bounding box prediction without consideration of the classification was 67.4%. After removing the bounding box prediction error, the precision and recall of BMSNet were similar to those of the hematologists in most categories.
In detecting more than 5% of blasts in the validation cohort, the area under the curve (AUC) of BMSNet (0.948) was higher than the AUC of the hematologists (0.929) but lower than the AUC of the pathologists (0.985). In detecting more than 20% of blasts, the AUCs of the hematologists (0.981) and pathologists (0.980) were similar and were higher than the AUC of BMSNet (0.942). Further analysis showed that the performance difference could be attributed to the myelodysplastic syndrome cases. In the competition cohort, the mean value of the correlations between BMSNet and FCM was 0.960, and the mean values of the correlations between the visiting staff and FCM ranged between 0.952 and 0.990. Conclusions Our deep learning model can assist hematologists in interpreting bone marrow smears by facilitating and accelerating the detection of hematopoietic cells. However, a detailed morphological interpretation still requires trained hematologists.
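Average precision for bounding-box prediction rests on matching predicted boxes to annotated cells by intersection-over-union (IoU). A minimal IoU helper (a generic sketch, not BMSNet code) is:

```python
def iou(box_a, box_b):
    """Intersection-over-union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A predicted cell typically counts as a true positive when its IoU with an annotation exceeds a threshold such as 0.5; average precision then summarizes precision across recall levels.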


10.2196/15931 ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. e15931 ◽  
Author(s):  
Chin-Sheng Lin ◽  
Chin Lin ◽  
Wen-Hui Fang ◽  
Chia-Jung Hsu ◽  
Sy-Jou Chen ◽  
...  

Background The detection of dyskalemias (hypokalemia and hyperkalemia) currently depends on laboratory tests. Since cardiac tissue is very sensitive to dyskalemia, electrocardiography (ECG) may be able to uncover clinically important dyskalemias before laboratory results are available. Objective Our study aimed to develop a deep learning model, ECG12Net, to detect dyskalemias based on ECG presentations and to evaluate the logic and performance of this model. Methods Spanning May 2011 to December 2016, 66,321 ECG records with corresponding serum potassium (K+) concentrations were obtained from 40,180 patients admitted to the emergency department. ECG12Net is an 82-layer convolutional neural network that estimates serum K+ concentration. Six clinicians (three emergency physicians and three cardiologists) participated in a human-machine competition. Sensitivity, specificity, and balanced accuracy were used to compare the performance of ECG12Net with that of these physicians. Results In a human-machine competition including 300 ECGs of different serum K+ concentrations, the areas under the curve for detecting hypokalemia and hyperkalemia with ECG12Net were 0.926 and 0.958, respectively, significantly better than those of our best clinicians. Moreover, in detecting hypokalemia and hyperkalemia, the sensitivities were 96.7% and 83.3%, respectively, and the specificities were 93.3% and 97.8%, respectively. In a test set including 13,222 ECGs, ECG12Net had similar sensitivity for severe hypokalemia (95.6%) and severe hyperkalemia (84.5%), with a mean absolute error of 0.531. The specificities for detecting hypokalemia and hyperkalemia were 81.6% and 96.0%, respectively. Conclusions A deep learning model based on the 12-lead ECG may help physicians promptly recognize severe dyskalemias and thereby potentially reduce cardiac events.
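Sensitivity, specificity, and balanced accuracy as used above are simple functions of the confusion matrix. A minimal sketch (not the ECG12Net evaluation code) is:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and balanced accuracy for 0/1 labels.

    Balanced accuracy averages the two class-conditional rates, so it is
    robust when one class (e.g. severe dyskalemia) is rare.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity, (sensitivity + specificity) / 2
```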


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 1556-1556
Author(s):  
Alexander S. Rich ◽  
Barry Leybovich ◽  
Melissa Estevez ◽  
Jamie Irvine ◽  
Nisha Singh ◽  
...  

1556 Background: Identifying patients with a particular cancer and determining the date of that diagnosis from EHR data is important for selecting real world research cohorts and conducting downstream analyses. However, cancer diagnoses and their dates are often not accurately recorded in the EHR in a structured form. We developed a unified deep learning model for identifying patients with NSCLC and their initial and advanced diagnosis date(s). Methods: The study used a cohort of 52,834 patients with lung cancer ICD codes from the nationwide deidentified Flatiron Health EHR-derived database. For all patients in the cohort, abstractors used an in-house technology-enabled platform to identify an NSCLC diagnosis, advanced disease, and relevant diagnosis date(s) via chart review. Advanced NSCLC was defined as stage IIIB or IV disease at diagnosis or early stage disease that recurred or progressed. The deep learning model was trained on 38,517 patients, with a separate 14,317 patient test cohort. The model input was a set of sentences containing keywords related to (a)NSCLC, extracted from a patient’s EHR documents. Each sentence was associated with a date, using the document timestamp or, if present, a date mentioned explicitly in the sentence. The sentences were processed by a GRU network, followed by an attentional network that integrated across sentences, outputting a prediction of whether the patient had been diagnosed with (a)NSCLC and the diagnosis date(s) if so. We measured sensitivity and positive predictive value (PPV) of extracting the presence of initial and advanced diagnoses in the test cohort. Among patients with both model-extracted and abstracted diagnosis dates, we also measured 30-day accuracy, defined as the proportion of patients where the dates match to within 30 days. Real world overall survival (rwOS) for patients abstracted vs. model-extracted as advanced was calculated using Kaplan-Meier methods (index date: abstracted vs. 
model-extracted advanced diagnosis date). Results: Results in the Table show the sensitivity, PPV, and accuracy of the model-extracted diagnoses and dates. rwOS was similar using model-extracted aNSCLC diagnosis dates (median = 13.7 months) versus abstracted diagnosis dates (median = 13.3 months), with a difference of 0.4 months (95% CI = [0.0, 0.8]). Conclusions: Initial and advanced diagnoses of NSCLC and their dates can be accurately extracted from unstructured clinical text using a deep learning algorithm. This can further enable the use of EHR data for research on real-world treatment patterns and outcomes, and for other applications such as clinical trial matching. Future work should aim to understand the impact of model errors on downstream analyses. [Table: see text]
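The attentional network that integrates across sentence representations amounts to softmax-weighted pooling of per-sentence vectors into a single document vector. A minimal sketch of that pooling step (the GRU sentence encoder is omitted, and this is not the authors' code) is:

```python
import math

def attention_pool(sentence_vecs, relevance_scores):
    """Combine per-sentence vectors into one document vector.

    sentence_vecs: one encoded vector per keyword-containing sentence
    (e.g. the output of a GRU encoder). relevance_scores: one scalar per
    sentence, typically produced by a small learned scoring network.
    """
    m = max(relevance_scores)                      # subtract max for stability
    weights = [math.exp(s - m) for s in relevance_scores]
    total = sum(weights)
    weights = [w / total for w in weights]          # softmax over sentences
    dim = len(sentence_vecs[0])
    return [sum(w * vec[d] for w, vec in zip(weights, sentence_vecs))
            for d in range(dim)]
```

The attention weights also indicate which sentences drove a prediction, which helps when auditing an extracted diagnosis date against the chart.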

