Deep learning assisted data inspection for radio astronomy

2020 ◽  
Vol 496 (2) ◽  
pp. 1517-1529
Author(s):  
Michael Mesarcik ◽  
Albert-Jan Boonstra ◽  
Christiaan Meijer ◽  
Walter Jansen ◽  
Elena Ranguelova ◽  
...  

ABSTRACT Modern radio telescopes combine thousands of receivers, long-distance networks, large-scale compute hardware, and intricate software. Due to this complexity, failures occur relatively frequently. In this work, we propose a novel use of unsupervised deep learning to diagnose system health for modern radio telescopes. The model is a convolutional variational autoencoder (VAE) that enables the projection of the high-dimensional time–frequency data to a low-dimensional prescriptive space. Using this projection, telescope operators are able to visually inspect failures, thereby maintaining system health. We have trained and quantitatively evaluated the performance of the VAE in controlled experiments on simulated data from HERA. Moreover, we present a qualitative assessment of the model trained and tested on real LOFAR data. Through the use of a naïve SVM classifier on the projected synthesized data, we show that there is a trade-off between the dimensionality of the projection and the number of compounded features in a given spectrogram. The VAE and SVM combination scores between 65 and 90 per cent accuracy, depending on the number of features in a given input. Finally, we present a prototype system-health-diagnostic web framework that integrates the evaluated model. The system is currently undergoing testing at the ASTRON observatory.
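The latent projection at the heart of this abstract can be sketched minimally. The toy example below uses hypothetical linear weights in plain Python in place of the paper's convolutional encoder; it shows the two encoder outputs (mean and log-variance) and the reparameterization step that makes a VAE trainable, producing the low-dimensional point an operator would inspect:

```python
import math
import random

random.seed(0)

def encode(spectrogram, mean_w, logvar_w):
    """Toy encoder: linear maps from a flattened spectrogram to the latent
    mean and log-variance (stand-ins for the paper's convolutional encoder)."""
    flat = [v for row in spectrogram for v in row]
    mu = [sum(w * x for w, x in zip(ws, flat)) for ws in mean_w]
    logvar = [sum(w * x for w, x in zip(ws, flat)) for ws in logvar_w]
    return mu, logvar

def reparameterize(mu, logvar):
    """z = mu + sigma * eps: the sampling step that keeps a VAE differentiable."""
    return [m + math.exp(0.5 * lv) * random.gauss(0, 1) for m, lv in zip(mu, logvar)]

spec = [[1.0, 0.5], [0.2, 0.8]]                        # a tiny 2x2 "spectrogram"
mean_w = [[0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1]]  # hypothetical weights
logvar_w = [[0.0] * 4, [0.0] * 4]                      # unit variance, for illustration
mu, logvar = encode(spec, mean_w, logvar_w)
z = reparameterize(mu, logvar)                         # the 2-D projection
```

In the paper's pipeline, points like `z` are what the downstream SVM classifies and what operators inspect visually; training the weights is of course the part this sketch omits.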

2016 ◽  
Vol 2016 ◽  
pp. 1-18
Author(s):  
Jian Luo ◽  
Jin Tang ◽  
Xiaoming Xiao

A cloud-based health care system for the elderly is proposed in this paper, providing abnormal gait behavior detection, classification, online diagnosis, and remote aid services. Intelligent mobile terminals with embedded triaxial acceleration sensors are used to capture the movement and ambulation information of the elderly. The collected signals are first enhanced by a Kalman filter, and the magnitude-of-signal-vector features are then extracted and decomposed into a linear combination of enhanced Gabor atoms. The Wigner-Ville analysis method is introduced and the problem is studied by joint time-frequency analysis. To address the shortage of large-scale abnormal behavior data during training, a cloud-based incremental SVM (CI-SVM) learning method is proposed. The original abnormal behavior data are first used to obtain the initial SVM classifier. The larger volume of abnormal behavior data subsequently collected from the elderly by mobile devices is then gathered on the cloud platform to conduct incremental training and obtain a new SVM classifier. With the CI-SVM learning method, the knowledge of the SVM classifier accumulates through dynamic incremental learning. Experimental results demonstrate that the proposed method is feasible and can be applied to aged care, emergency aid, and related fields.
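The Kalman-filter enhancement step mentioned above can be illustrated with a minimal scalar filter over an acceleration-magnitude stream; the noise parameters below are illustrative choices, not the paper's:

```python
def kalman_1d(zs, q=1e-3, r=0.1):
    """Scalar Kalman filter over a noisy acceleration-magnitude stream.
    q is process noise, r is measurement noise; values here are illustrative."""
    x, p = zs[0], 1.0              # initial state estimate and covariance
    out = []
    for z in zs:
        p += q                     # predict: uncertainty grows by process noise
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # correct with the measurement innovation
        p *= 1 - k
        out.append(x)
    return out

noisy = [1.0, 1.2, 0.9, 1.1, 1.0, 5.0, 1.0]   # the 5.0 spike mimics an artefact
smooth = kalman_1d(noisy)
```

The filtered stream tracks the underlying magnitude while attenuating the spike, which is the property the later Gabor-atom decomposition benefits from.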


2007 ◽  
Vol 08 (04) ◽  
pp. 321-336 ◽  
Author(s):  
HIROAKI ECHIGO ◽  
YOSHITAKA SHIBATA ◽  
KAZUO TAKAHATA

In this paper, a robust, large-scale, resident-oriented safety information system for the occurrence of various disasters, built over a nationwide high-speed network, is introduced. Evacuated residents can register their safety information, whether they evacuated safely or not, to the local safety information servers in the evacuation area, using mobile PCs or terminals at the evacuation place, or mobile terminals along the evacuation route. All of the local information servers are connected to each other by a wireless network, and the safety information can be sent to an upper-layer database in the district area and finally integrated into district-wide safety information for that region. In our system, local servers damaged by the disaster can be detected and recovered by the upper-layer database server. In turn, the upper-layer database servers are backed up by mirror servers located at mutually distant locations, to isolate the influence of the same disaster should some of them be destroyed or disabled. Thus, by introducing two levels of redundancy and backup functions, a larger-scale and more robust safety information database system can be realized. A prototype system was constructed over the Japan Gigabit Network (JGN2) to evaluate the performance of the suggested system. Through the performance evaluation of the prototype, we verified the usability and scalability of our suggested system.
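A minimal sketch of the two-level redundancy described above, with hypothetical class and record names: local servers forward registrations to a district database, which synchronously mirrors them to a distant backup site:

```python
class SafetyStore:
    """A local safety-information server: holds resident status records."""
    def __init__(self):
        self.records = {}

    def register(self, person, status):
        self.records[person] = status

class DistrictDB(SafetyStore):
    """Upper-layer district database, mirrored at a geographically distant site."""
    def __init__(self, mirror=None):
        super().__init__()
        self.mirror = mirror

    def register(self, person, status):
        super().register(person, status)
        if self.mirror is not None:
            self.mirror.register(person, status)  # second level of redundancy

mirror = SafetyStore()                 # distant backup site
district = DistrictDB(mirror)
local = SafetyStore()                  # server in the evacuation area
local.register("resident-42", "safe")
for person, status in local.records.items():  # periodic upward sync
    district.register(person, status)
```

After the sync, the record exists at all three levels, so losing any single server (the failure mode the paper targets) does not lose the resident's status.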


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3079 ◽  
Author(s):  
Attila Reiss ◽  
Ina Indlekofer ◽  
Philip Schmidt ◽  
Kristof Van Laerhoven

Photoplethysmography (PPG)-based continuous heart rate monitoring is essential in a number of domains, e.g., for healthcare or fitness applications. Recently, methods based on time-frequency spectra emerged to address the challenges of motion artefact compensation. However, existing approaches are highly parametrised and optimised for specific scenarios on small, public datasets. We address this fragmentation by contributing research into the robustness and generalisation capabilities of PPG-based heart rate estimation approaches. First, we introduce a novel large-scale dataset (called PPG-DaLiA), including a wide range of activities performed under close-to-real-life conditions. Second, we extend a state-of-the-art algorithm, significantly improving its performance on several datasets. Third, we introduce deep learning to this domain and investigate various convolutional neural network architectures. Our end-to-end learning approach takes the time-frequency spectra of synchronised PPG and accelerometer signals as input, and provides the estimated heart rate as output. Finally, we compare the novel deep learning approach to classical methods, performing evaluation on four public datasets. We show that on large datasets the deep learning model significantly outperforms other methods: the mean absolute error could be reduced by 31% on the new dataset PPG-DaLiA, and by 21% on the dataset WESAD.
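As a point of reference for the classical methods the paper compares against, the simplest spectral estimator picks the dominant frequency of a PPG window and converts it to beats per minute. The sketch below (a plain DFT peak search on a clean synthetic window; not the paper's CNN) shows that mechanism:

```python
import math

def dominant_bpm(signal, fs):
    """Estimate heart rate as the dominant DFT frequency of a PPG window,
    the classical spectral-peak baseline (no motion-artefact handling)."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):          # skip DC, search positive frequencies
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n * 60.0       # bin index -> Hz -> beats per minute

fs = 32                                  # Hz, a typical wrist-PPG sampling rate
t = [i / fs for i in range(fs * 8)]      # an 8-second window
ppg = [math.sin(2 * math.pi * 1.25 * x) for x in t]   # 1.25 Hz pulse = 75 bpm
```

On a clean sinusoid this recovers the pulse rate exactly; motion artefacts create competing spectral peaks, which is precisely the failure mode the accelerometer channel and the learned models address.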


2017 ◽  
Vol 11 (01) ◽  
pp. 85-109 ◽  
Author(s):  
Samira Pouyanfar ◽  
Shu-Ching Chen

With the explosion of multimedia data, semantic event detection from videos has become a demanding and challenging topic. In addition, when the data has a skewed distribution, interesting event detection also needs to address the data imbalance problem. The recent proliferation of deep learning has made it an essential part of many Artificial Intelligence (AI) systems. To date, various deep learning architectures have been proposed for numerous applications such as Natural Language Processing (NLP) and image processing. Nonetheless, it is still impractical for a single model to work well across different applications. Hence, in this paper, a new ensemble deep learning framework is proposed that can be utilized in various scenarios and datasets. The proposed framework is able to handle the over-fitting issue as well as the information losses caused by single models. Moreover, it alleviates the imbalanced data problem in real-world multimedia data. The whole framework includes a suite of deep learning feature extractors integrated with an enhanced ensemble algorithm based on performance metrics for the imbalanced data. The Support Vector Machine (SVM) classifier is utilized as the last layer of each deep learning component and also as the weak learner in the ensemble module. The framework is evaluated on two large-scale and imbalanced video datasets (namely, disaster and TRECVID). The extensive experimental results illustrate the advantage and effectiveness of the proposed framework. They also demonstrate that the proposed framework outperforms several well-known deep learning methods, as well as conventional features integrated with different classifiers.
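The performance-weighted ensemble idea can be sketched as weighted majority voting; the weak learners and weights below are hypothetical stand-ins for the SVM components and their validation metrics on the imbalanced split:

```python
def ensemble_predict(classifiers, weights, x):
    """Weighted majority vote: each weak learner's vote is scaled by a
    performance weight (e.g. its F1 score on a held-out imbalanced split)."""
    score = {}
    for clf, w in zip(classifiers, weights):
        label = clf(x)
        score[label] = score.get(label, 0.0) + w
    return max(score, key=score.get)

# Three hypothetical weak learners on a single 1-D feature:
clfs = [lambda x: "event" if x > 0.5 else "background",
        lambda x: "event" if x > 0.7 else "background",
        lambda x: "event"]                 # biased learner, hence low weight
weights = [0.9, 0.8, 0.1]                  # from validation performance metrics
```

Weighting votes by a metric computed on the minority class is one simple way an ensemble can counter imbalance: a learner that always predicts the majority (or here, always "event") earns a small weight and rarely decides the outcome.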


Author(s):  
Gengshen Wu ◽  
Zijia Lin ◽  
Jungong Han ◽  
Li Liu ◽  
Guiguang Ding ◽  
...  

Despite its great success, matrix factorization based cross-modality hashing suffers from two problems: 1) there is no engagement between feature learning and binarization; and 2) most existing methods impose a relaxation strategy by discarding the discrete constraints when learning the hash function, which usually yields suboptimal solutions. In this paper, we propose a novel multimodal hashing framework, referred to as Unsupervised Deep Cross-Modal Hashing (UDCMH), for multimodal data search in a self-taught manner via integrating deep learning and matrix factorization with binary latent factor models. On the one hand, our unsupervised deep learning framework enables the feature learning to be jointly optimized with the binarization. On the other hand, the hashing system based on the binary latent factor models can generate unified binary codes by solving a discrete-constrained objective function directly, with no need for a relaxation step. Moreover, novel Laplacian constraints are incorporated into the objective function, which make it possible to preserve not only the nearest neighbors commonly considered in the literature but also the farthest neighbors of the data, even when semantic labels are not available. Extensive experiments on multiple datasets highlight the superiority of the proposed framework over several state-of-the-art baselines.
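The retrieval mechanics any cross-modal hashing system relies on can be sketched minimally. Note that the sign-based binarization below is exactly the post-hoc relaxation the paper argues against; it is used here only to illustrate how binary codes and Hamming-distance ranking work (the feature values and item names are invented):

```python
def binarize(features):
    """Sign-based binary codes: a simple post-hoc binarization, shown only
    to illustrate the code/search mechanics (not the paper's joint scheme)."""
    return tuple(1 if f >= 0 else 0 for f in features)

def hamming(a, b):
    """Number of differing bits between two equal-length codes."""
    return sum(x != y for x, y in zip(a, b))

database = {"img1": binarize([0.9, -0.2, 0.4, -0.7]),
            "txt1": binarize([0.8, -0.1, 0.5, -0.9]),   # same semantics as img1
            "img2": binarize([-0.6, 0.3, -0.2, 0.8])}
query = binarize([1.0, -0.3, 0.2, -0.5])
ranked = sorted(database, key=lambda k: hamming(database[k], query))
```

Because the image and text items with shared semantics map to the same code, a query from either modality retrieves both, which is the unified-code property the paper's binary latent factor model is designed to guarantee.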


Author(s):  
Ron Harris

Before the seventeenth century, trade across Eurasia was mostly conducted in short segments along the Silk Route and Indian Ocean. Business was organized in family firms, merchant networks, and state-owned enterprises, and dominated by Chinese, Indian, and Arabic traders. However, around 1600 the first two joint-stock corporations, the English and Dutch East India Companies, were established. This book tells the story of overland and maritime trade without Europeans, of European Cape Route trade without corporations, and of how new, large-scale, and impersonal organizations arose in Europe to control long-distance trade for more than three centuries. It shows that by 1700, the scene and methods for global trade had dramatically changed: Dutch and English merchants shepherded goods directly from China and India to northwestern Europe. To understand this transformation, the book compares the organizational forms used in four major regions: China, India, the Middle East, and Western Europe. The English and Dutch were the last to leap into Eurasian trade, and they innovated in order to compete. They raised capital from passive investors through impersonal stock markets and their joint-stock corporations deployed more capital, ships, and agents to deliver goods from their origins to consumers. The book explores the history behind a cornerstone of the modern economy, and how this organizational revolution contributed to the formation of global trade and the creation of the business corporation as a key factor in Europe's economic rise.


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are broadly helpful to the healthcare and biomedical sectors for predicting disease. For trivial symptoms, it is difficult to meet doctors at any time in the hospital. Thus, big data provides essential information about diseases on the basis of patients' symptoms. For several medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model offers structured input, which requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning approach. Here, different datasets pertaining to "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for conducting the experiment. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized in order to bring the attributes' ranges to a common level. Further, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to emphasize large-scale deviations. Here, the weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are subjected to hybrid deep learning algorithms, namely the "Deep Belief Network (DBN) and Recurrent Neural Network (RNN)". As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Further, comparative evaluation of the proposed prediction against existing models certifies its effectiveness through various performance measures.
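The first two phases, normalization and weighted feature extraction, can be sketched minimally; the attribute values and weights below are fixed placeholders for what the paper optimizes with JA-MVO:

```python
def min_max_normalize(column):
    """Scale one attribute to [0, 1] so all attributes share a common range."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in column]

def weighted_features(row, weights):
    """Multiply each normalized attribute by a weight to emphasize deviations;
    the paper tunes these weights with JA-MVO, here they are fixed placeholders."""
    return [w * v for w, v in zip(weights, row)]

col = [80, 120, 160, 200]            # e.g. one clinical attribute across patients
norm = min_max_normalize(col)        # all values now in [0, 1]
row = [norm[1], 0.5, 0.9]            # one patient's normalized attributes
wf = weighted_features(row, [2.0, 0.1, 1.5])
```

Vectors like `wf` are what the DBN/RNN hybrid would consume in the prediction phase; the point of the weighting is that attributes with larger optimized weights contribute proportionally more to the learned decision.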


2017 ◽  
Vol 14 (9) ◽  
pp. 1513-1517 ◽  
Author(s):  
Rodrigo F. Berriel ◽  
Andre Teixeira Lopes ◽  
Alberto F. de Souza ◽  
Thiago Oliveira-Santos

Author(s):  
Mathieu Turgeon-Pelchat ◽  
Samuel Foucher ◽  
Yacine Bouroubi
