Insight into Applications of Deep Learning

NCICCNDA ◽  
2018 ◽  
Author(s):  
Aishwarya T ◽  
Ravi Kumar V


2020 ◽  
Vol 11 (1) ◽  
pp. 22-26
Author(s):  
S.V. Tsymbal

The digital revolution has transformed the way people access information, communicate and learn. It is teachers' responsibility to set up environments and opportunities for deep learning experiences that can uncover and boost learners’ capacities. Twenty-first-century competences can be seen as necessary to navigate contemporary and future life, shaped by technology that changes workplaces and lifestyles. This study explores the concept of digital competence and provides insight into the European Framework for the Digital Competence of Educators.


Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1365
Author(s):  
Bogdan Muşat ◽  
Răzvan Andonie

Convolutional neural networks utilize a hierarchy of neural network layers. The statistical aspects of information concentration in successive layers can bring insight into the feature abstraction process. We analyze the saliency maps of these layers from the perspective of semiotics, the study of signs and sign-using behavior. In computational semiotics, this aggregation operation (known as superization) is accompanied by a decrease of spatial entropy: signs are aggregated into supersigns. Using spatial entropy, we compute the information content of the saliency maps and study the superization processes that take place between successive layers of the network. In our experiments, we visualize the superization process and show how the obtained knowledge can be used to explain the neural decision model. In addition, we attempt to optimize the architecture of the neural model employing a semiotic greedy technique. To the best of our knowledge, this is the first application of computational semiotics to the analysis and interpretation of deep neural networks.
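As a rough illustration of the spatial-entropy measurement this abstract describes, the sketch below treats a saliency map as a probability distribution and computes its Shannon entropy; the function name and the toy maps are illustrative assumptions, not the authors' code.

```python
import numpy as np

def spatial_entropy(saliency):
    """Shannon entropy (bits) of a saliency map normalized to a distribution."""
    p = saliency.ravel().astype(float)
    p = p / p.sum()
    p = p[p > 0]  # drop zero-probability cells before taking the log
    return -np.sum(p * np.log2(p))

# A uniform map is maximally spread out; a single peak is maximally
# concentrated, mirroring the entropy decrease that accompanies superization.
uniform = np.ones((8, 8))
peaked = np.zeros((8, 8))
peaked[4, 4] = 1.0
print(spatial_entropy(uniform))  # 6.0 (log2 of 64 cells)
print(spatial_entropy(peaked))   # 0.0
```

Comparing this quantity across the saliency maps of successive layers would indicate whether information is being concentrated, i.e. whether superization is taking place.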


RMD Open ◽  
2020 ◽  
Vol 6 (1) ◽  
pp. e001063 ◽  
Author(s):  
Berend Stoel

After decades of basic research with many setbacks, artificial intelligence (AI) has recently achieved significant breakthroughs, enabling computer programs to outperform human interpretation of medical images in very specific areas. After this shock wave, which probably exceeds the impact of AI's first victory of defeating the world chess champion in 1997, some reflection may be appropriate on the consequences for clinical imaging in rheumatology. In this narrative review, a short explanation is given of the various AI techniques, including ‘deep learning’, and how these have been applied to rheumatological imaging, focussing on rheumatoid arthritis and systemic sclerosis as examples. By discussing the principal limitations of AI and deep learning, this review aims to give insight into possible future perspectives of AI applications in rheumatology.


Iproceedings ◽  
10.2196/15225 ◽  
2019 ◽  
Vol 5 (1) ◽  
pp. e15225
Author(s):  
Felipe Masculo ◽  
Jorn op den Buijs ◽  
Mariana Simons ◽  
Aki Harma

Background A Personal Emergency Response Service (PERS) enables an aging population to receive help quickly when an emergency situation occurs. The reasons that trigger a PERS alert are varied, including a sudden worsening of a chronic condition, a fall, or other injury. Every PERS case is documented by the response center using a combination of structured variables and free text notes. The text notes, in particular, contain a wealth of information about an incident, such as contextual details about the situation, symptoms, and more. Analysis of these notes at a population level could provide insight into the various situations that cause PERS medical alerts. Objective The objectives of this study were to (1) develop methods to enable the large-scale analysis of text notes from a PERS response center, and (2) apply these methods to a large dataset and gain insight into the different situations that cause medical alerts. Methods More than 2.5 million deidentified PERS case text notes were used to train a document embedding model (ie, a deep learning Recurrent Neural Network [RNN] that takes the medical alert text note as input and produces a corresponding fixed length vector representation as output). We applied this model to 100,000 PERS text notes related to medical incidents that resulted in emergency department admission. Finally, we used t-SNE, a nonlinear dimensionality reduction method, to visualize the vector representation of the text notes in 2D as part of a graphical user interface that enabled interactive exploration of the dataset and visual analytics. Results Visual analysis of the vectors revealed the existence of several well-separated clusters of incidents such as fall, stroke/numbness, seizure, breathing problems, chest pain, and nausea, each of them related to the emergency situation encountered by the patient as recorded in an existing structured variable. 
In addition, subclusters were identified within each cluster which grouped cases based on additional features extracted from the PERS text notes and not available in the existing structured variables. For example, the incidents labeled as falls (n=37,842) were split into several subclusters corresponding to falls with bone fracture (n=1437), falls with bleeding (n=4137), falls caused by dizziness (n=519), etc. Conclusions The combination of state-of-the-art natural language processing, deep learning, and visualization techniques enables the large-scale analysis of medical alert text notes. This analysis demonstrates that, in addition to falls alerts, the PERS service is broadly used to signal for help in situations often related to underlying chronic conditions and acute symptoms such as respiratory distress, chest pain, diabetic reaction, etc. Moreover, the proposed techniques enable the extraction of structured information related to the medical alert from unstructured text with minimal human supervision. This structured information could be used, for example, to track trends over time, to generate concise medical alert summaries, and to create predictive models for desired outcomes.
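The embedding-plus-t-SNE pipeline the authors describe can be sketched as follows; random vectors stand in for the RNN note embeddings, and the dimensions and t-SNE parameters here are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for fixed-length note embeddings: in the study these come from an
# RNN document-embedding model; here random 128-dim vectors are used instead.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 128))

# Nonlinear reduction to 2D, the representation used for interactive
# exploration and visual analytics of the incident clusters.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)
print(coords.shape)  # (200, 2)
```

With real embeddings, plotting `coords` colored by the existing structured incident variable is what would reveal the fall, stroke/numbness, seizure, and other clusters the abstract reports.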


2021 ◽  
Author(s):  
Charles A Ellis ◽  
Robyn L Miller ◽  
Vince Calhoun

The frequency domain of electroencephalography (EEG) data has emerged as a particularly important area of EEG analysis. EEG spectra have been analyzed with explainable machine learning and deep learning methods. However, as deep learning has developed, most studies use raw EEG data, which is not well-suited for traditional explainability methods. Several studies have introduced methods for spectral insight into classifiers trained on raw EEG data. These studies have provided global insight into the frequency bands that are generally important to a classifier but do not provide local insight into the frequency bands important for the classification of individual samples. This local explainability could be particularly helpful for EEG analysis domains like sleep stage classification that feature multiple evolving states. We present a novel local spectral explainability approach and use it to explain a convolutional neural network trained for automated sleep stage classification. We use our approach to show how the relative importance of different frequency bands varies over time and even within the same sleep stages. Furthermore, to better understand how our approach compares to existing methods, we compare a global estimate of spectral importance generated from our local results with an existing global spectral importance approach. We find that the δ band is most important for most sleep stages, though β is most important for the non-rapid eye movement 2 (NREM2) sleep stage. Additionally, θ is particularly important for identifying Awake and NREM1 samples. Our study represents the first approach developed for local spectral insight into deep learning classifiers trained on raw EEG time series.
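One common way to obtain the kind of local spectral importance discussed above is to ablate a frequency band from a single sample and measure the change in the model's output. The sketch below uses a toy scoring function in place of a trained sleep-stage classifier, so every name and parameter here is an illustrative assumption rather than the authors' method.

```python
import numpy as np

def band_ablate(x, fs, lo, hi):
    """Zero the [lo, hi] Hz band of a 1-D signal via the real FFT."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs >= lo) & (freqs <= hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def toy_score(x, fs):
    """Toy stand-in for a classifier logit: total 8-12 Hz magnitude."""
    X = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return X[(freqs >= 8) & (freqs <= 12)].sum()

fs = 100  # Hz
t = np.arange(0, 10, 1 / fs)
sample = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 2 * t)

# Local importance of the 8-12 Hz band for this one sample: how much the
# score drops when the band is removed.
importance = toy_score(sample, fs) - toy_score(band_ablate(sample, fs, 8, 12), fs)
print(importance > 0)  # True: this particular sample's score depends on the band
```

Repeating this over bands (δ, θ, α, β) and over samples yields per-sample importance profiles, which is the kind of local view the abstract contrasts with purely global estimates.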


2020 ◽  
Vol 12 (6) ◽  
pp. 1006 ◽  
Author(s):  
Davide Cozzolino ◽  
Luisa Verdoliva ◽  
Giuseppe Scarpa ◽  
Giovanni Poggi

We propose a new method for SAR image despeckling, which performs nonlocal filtering with a deep learning engine. Nonlocal filtering has proven very effective for SAR despeckling. The key idea is to exploit image self-similarities to estimate the hidden signal. In its simplest form, pixel-wise nonlocal means, the target pixel is estimated through a weighted average of neighbors, with weights chosen on the basis of a patch-wise measure of similarity. Here, we keep the very same structure of plain nonlocal means, to ensure interpretability of results, but use a convolutional neural network to assign weights to estimators. Suitable nonlocal layers are used in the network to take into account information in a large analysis window. Experiments on both simulated and real-world SAR images show that the proposed method exhibits state-of-the-art performance. In addition, the comparison of weights generated by conventional and deep learning-based nonlocal means provides new insight into the potential and limits of nonlocal information for SAR despeckling.
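The "simplest form, pixel-wise nonlocal means" that this method builds on can be sketched directly. In the paper a convolutional network replaces the hand-crafted similarity weights, which is beyond the scope of a sketch; the patch, search-window, and `h` parameters below are illustrative assumptions.

```python
import numpy as np

def nlm_pixel(img, r, c, patch=3, search=7, h=0.1):
    """Plain pixel-wise nonlocal means: estimate pixel (r, c) as a weighted
    average of pixels in a search window, with weights from patch similarity."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    ref = padded[r:r + patch, c:c + patch]  # patch centered on (r, c)
    half = search // 2
    num = den = 0.0
    for i in range(max(0, r - half), min(img.shape[0], r + half + 1)):
        for j in range(max(0, c - half), min(img.shape[1], c + half + 1)):
            cand = padded[i:i + patch, j:j + patch]
            w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
            num += w * img[i, j]
            den += w
    return num / den

# On a constant image every patch matches perfectly, so the estimate is exact.
flat = np.full((9, 9), 0.5)
print(nlm_pixel(flat, 4, 4))  # 0.5
```

The proposed method keeps exactly this averaging structure but lets a network produce the weights `w`, which is what makes the paper's comparison of conventional and learned weights meaningful.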


2020 ◽  
Author(s):  
Raju Singh

This report offers an insight into the world of deep learning and convolutional neural networks (CNNs). It is an attempt to perform classification using neural networks and deep learning on a given dataset (a subset of the MNIST dataset). The MNIST dataset contains 70,000 images of handwritten digits, divided into 60,000 training images and 10,000 testing images.
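A minimal sketch of the classification task the report describes, assuming a small feed-forward network; sklearn's built-in 8x8 digits dataset stands in for the MNIST subset, so the sizes and parameters here are illustrative, not the report's.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# sklearn's digits (1797 8x8 images) stands in for the 70,000-image MNIST set.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X / 16.0, y, test_size=0.2, random_state=0)  # scale pixels to [0, 1]

# A small neural-network classifier with one hidden layer of 64 units.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print(round(clf.score(X_test, y_test), 2))  # held-out accuracy, typically > 0.9
```

The same train/evaluate split mirrors MNIST's 60,000/10,000 division between training and testing images.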


2019 ◽  
Author(s):  
Eric Prince ◽  
Todd C. Hankinson

ABSTRACT
High-throughput data is commonplace in biomedical research, as seen with technologies such as single-cell RNA sequencing (scRNA-seq) and other Next Generation Sequencing technologies. As these techniques continue to be increasingly utilized, it is critical to have analysis tools that can identify meaningful, complex relationships between variables (in the case of scRNA-seq: genes) in a way that is free of human bias. Moreover, it is equally paramount that both linear and non-linear (i.e., one-to-many) variable relationships be considered when contrasting datasets. HD Spot is a deep learning-based framework that generates an optimal interpretable classifier for a given high-throughput dataset using a simple genetic algorithm as well as an autoencoder-to-classifier transfer learning approach. Using four unique publicly available scRNA-seq datasets with published ground truth, we demonstrate the robustness of HD Spot and its ability to identify ontologically accurate gene lists for a given data subset. HD Spot serves as a bioinformatic tool that allows novice and advanced analysts to gain complex insight into their respective datasets, enabling the development of novel hypotheses.


Author(s):  
Joan Serrà

Deep learning is an undeniably hot topic, not only within academia and industry, but also in society and the media. The reasons for the advent of its popularity are manifold: unprecedented availability of data and computing power, some innovative methodologies, minor but significant technical tricks, etc. Interestingly, however, the current success and practice of deep learning seems to be uncorrelated with its theoretical, more formal understanding. As a result, deep learning's state of the art presents a number of unintuitive properties or situations. In this note, I highlight some of these unintuitive properties, try to show relevant recent work, and expose the need to get insight into them, either by formal or more empirical means.

