Questioning Domain Adaptation in Myoelectric Hand Prostheses Control: An Inter- and Intra-Subject Study

Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7500
Author(s):  
Giulio Marano ◽  
Cristina Brambilla ◽  
Robert Mihai Mira ◽  
Alessandro Scano ◽  
Henning Müller ◽  
...  

One major challenge limiting the use of dexterous robotic hand prostheses controlled via electromyography and pattern recognition is the substantial effort required to train complex models from scratch. To overcome this problem, several studies in recent years proposed transfer learning, combining pre-trained models (obtained from prior subjects) with training sessions performed on a specific user. Although a few promising results were reported in the past, it was recently shown that conventional transfer learning algorithms do not increase performance if proper hyperparameter optimization is performed on the standard approach that does not exploit transfer learning. The objective of this paper is to introduce novel analyses on this topic by using a random forest classifier without hyperparameter optimization, and to extend them with experiments performed on data recorded from the same patient but in different acquisition sessions. Two domain adaptation techniques were tested on the random forest classifier, allowing us to conduct experiments on healthy subjects and amputees. In contrast to several previous papers, our results show no appreciable improvement in accuracy, regardless of the transfer learning technique tested. The lack of benefit from adaptive learning is also demonstrated, for the first time, in an intra-subject experimental setting using as a source ten data acquisitions recorded from the same subject over five different days.
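A minimal sketch of the comparison described above: a random forest trained only on a new user's few samples versus a naive pooled-data transfer baseline. All data here is synthetic and the feature values, shifts, and pooling strategy are illustrative assumptions, not the paper's actual EMG features or domain adaptation algorithms.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_subject(n_per_class, shift):
    # Two synthetic "gesture" classes with 8 EMG-like features each;
    # `shift` models inter-subject electrode/physiology differences.
    x0 = rng.normal(0.0 + shift, 1.0, size=(n_per_class, 8))
    x1 = rng.normal(2.0 + shift, 1.0, size=(n_per_class, 8))
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return np.vstack([x0, x1]), y

X_src, y_src = make_subject(200, shift=0.5)    # prior subject (source)
X_tgt, y_tgt = make_subject(30, shift=0.0)     # new user, few samples
X_test, y_test = make_subject(200, shift=0.0)  # new user, test set

# Baseline: random forest trained only on the new user's samples.
baseline = RandomForestClassifier(n_estimators=100, random_state=0)
baseline.fit(X_tgt, y_tgt)

# Naive transfer: pool source and target samples before training.
pooled = RandomForestClassifier(n_estimators=100, random_state=0)
pooled.fit(np.vstack([X_src, X_tgt]), np.concatenate([y_src, y_tgt]))

acc_base = accuracy_score(y_test, baseline.predict(X_test))
acc_pool = accuracy_score(y_test, pooled.predict(X_test))
print(f"target-only: {acc_base:.2f}  pooled transfer: {acc_pool:.2f}")
```

On easy synthetic data like this, both approaches perform similarly, which mirrors the paper's finding that transfer brings no appreciable gain over a plain target-only classifier.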

2020 ◽  
Author(s):  
Sonam Wangchuk ◽  
Tobias Bolch

Accurate detection and mapping of glacial lakes in the Alpine regions such as the Himalayas, the Alps and the Andes are challenged by many factors. These factors include 1) the small size of glacial lakes, 2) cloud cover in optical satellite images, 3) cast shadows from mountains and clouds, 4) seasonal snow in satellite images, 5) varying degrees of turbidity among glacial lakes, and 6) frozen glacial lake surfaces. In our study, we propose a fully automated approach that overcomes most of the above-mentioned challenges to detect and map glacial lakes accurately, using multi-source data and machine learning techniques such as the random forest classifier algorithm. The multi-source data come from the Sentinel-1 Synthetic Aperture Radar (radar backscatter), the Sentinel-2 multispectral instrument (NDWI), and the SRTM digital elevation model (slope). We use these data as inputs for the rule-based segmentation of potential glacial lakes, where decision rules are implemented from the expert system. The potential glacial lake polygons are then classified either as glacial lakes or non-glacial lakes by the trained and tested random forest classifier algorithm. The performance of the method was assessed in eight test sites located across the Alpine regions of the world (e.g. the Boshula mountain range and Koshi basin in the Himalayas, the Tajik Pamirs, the Swiss Alps and the Peruvian Andes). We show that the proposed method performs efficiently irrespective of geographic, geologic, climatic, and glacial lake conditions.
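The two-stage pipeline described above (rule-based segmentation, then random forest classification of candidate polygons) can be sketched as follows. All inputs, thresholds, and per-polygon feature values below are synthetic and illustrative; the paper's expert-system rules and training data are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic per-pixel inputs standing in for the three data sources.
green = rng.uniform(0.0, 1.0, (64, 64))           # Sentinel-2 green band
nir = rng.uniform(0.0, 1.0, (64, 64))             # Sentinel-2 NIR band
backscatter = rng.uniform(-25.0, -5.0, (64, 64))  # Sentinel-1 dB (synthetic)
slope = rng.uniform(0.0, 45.0, (64, 64))          # SRTM-derived slope (deg)

# NDWI highlights water; low backscatter and gentle slope support it.
ndwi = (green - nir) / (green + nir + 1e-9)

# Stage 1: decision rules segment potential lake pixels
# (thresholds here are illustrative assumptions).
candidates = (ndwi > 0.2) & (backscatter < -15.0) & (slope < 10.0)

# Stage 2: a random forest separates lakes from look-alikes using
# per-polygon summary features [mean NDWI, mean backscatter, mean slope].
X_lake = rng.normal([0.4, -20.0, 4.0], [0.1, 2.0, 2.0], size=(100, 3))
X_other = rng.normal([0.1, -12.0, 12.0], [0.1, 2.0, 3.0], size=(100, 3))
X = np.vstack([X_lake, X_other])
y = np.array([1] * 100 + [0] * 100)

rf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)
train_acc = rf.score(X, y)
print(candidates.sum(), train_acc)
```

The point of the two stages is that cheap per-pixel rules cut the search space, while the classifier handles the harder lake/non-lake decision on polygon-level features.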


2021 ◽  
Vol 23 (08) ◽  
pp. 532-537
Author(s):  
Cherlakola Abhinav Reddy ◽  
Sai Nitesh Gadiraju ◽  
Dr. Samala Nagaraj ◽  
...  

Online media has progressively become integral to the way billions of individuals experience news and events, often bypassing journalists, the traditional gatekeepers of breaking news. Real-world events create a corresponding spike of posts (tweets) on Twitter. This places great importance on the credibility of information found on online media platforms like Twitter. We used various supervised learning techniques, such as Naïve Bayes, Decision Trees, and Support Vector Machines, to separate tweets into genuine and fake news. For our machine learning models, we used tweet and user features as predictors. We achieved an accuracy of 88% using the Random Forest classifier and 88% using the Decision Tree. Nevertheless, we believe that analyzing user accounts would further increase the precision of our models.
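A hedged sketch of the setup above: a random forest trained on tweet and user features to separate genuine from fake posts. The feature set (retweets, tweet length, followers, account age) and all values are invented for illustration; the study's dataset is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
n = 400

# Synthetic tweet features [retweets, length] and user features
# [followers, account age in days]; fake-news accounts are modeled
# as younger with fewer followers (an assumption for illustration).
genuine = np.column_stack([
    rng.poisson(20, n), rng.normal(120, 30, n),
    rng.lognormal(7, 1, n), rng.normal(1500, 400, n)])
fake = np.column_stack([
    rng.poisson(60, n), rng.normal(90, 30, n),
    rng.lognormal(4, 1, n), rng.normal(200, 100, n)])

X = np.vstack([genuine, fake])
y = np.array([0] * n + [1] * n)  # 1 = fake news
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=2, stratify=y)

rf = RandomForestClassifier(n_estimators=100, random_state=2)
rf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, rf.predict(X_te))
print(f"accuracy: {acc:.2f}")
```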


2021 ◽  
Vol 19 (6) ◽  
pp. 584-602
Author(s):  
Lucian Jose Gonçales ◽  
Kleinner Farias ◽  
Lucas Kupssinskü ◽  
Matheus Segalotto

EEG signals are a relevant indicator for measuring aspects related to human factors in Software Engineering. EEG is used in software engineering to train machine learning techniques for a wide range of applications, including classifying task difficulty and developers' level of experience. The EEG signal contains noise such as abnormal readings, electrical interference, and eye movements, which are usually not of interest to the analysis and therefore reduce the precision of the machine learning techniques. However, research in software engineering has not demonstrated the effectiveness of applying these filters to EEG signals. The objective of this work is to analyze the effectiveness of filters on EEG signals in the software engineering context. As the literature has not focused on the classification of developers' code comprehension, this study analyzes the effectiveness of applying EEG filters when training a machine learning technique to classify developers' code comprehension. A Random Forest (RF) machine learning technique was trained with filtered EEG signals to classify developers' code comprehension; another random forest classifier was trained with unfiltered EEG data. Both models were trained using 10-fold cross-validation. This work measures the classifiers' effectiveness using the f-measure metric, and uses the t-test, Wilcoxon, and Mann-Whitney U tests to analyze the difference in effectiveness (f-measure) between the classifier trained with filtered EEG and the classifier trained with unfiltered EEG. The tests show a significant difference after applying EEG filters when classifying developers' code comprehension with the random forest classifier. The conclusion is that the use of EEG filters significantly improves the effectiveness of classifying code comprehension using the random forest technique.
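The evaluation above can be sketched end to end: band-pass filter an EEG-like trace, then compare per-fold f-measures of a random forest on "filtered" versus "unfiltered" features with a Wilcoxon signed-rank test. The sampling rate, the 1-40 Hz band, and the synthetic features are assumptions for illustration only; filtering is simulated as reduced feature noise.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import wilcoxon
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
fs = 256  # sampling rate in Hz (assumed)

# Band-pass filtering of an EEG-like trace (1-40 Hz, an assumed band).
t = np.arange(0, 2, 1 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 0.8 * rng.normal(size=t.size)
b, a = butter(4, [1, 40], btype="bandpass", fs=fs)
clean = filtfilt(b, a, raw)

# Stand-in for the study's comparison: "filtered" features are less
# noisy than "unfiltered" ones for the same two comprehension classes.
n = 100
y = np.array([0] * n + [1] * n)
X_filt = np.vstack([rng.normal(0.0, 0.3, (n, 2)),
                    rng.normal(1.5, 0.3, (n, 2))])
X_raw = np.vstack([rng.normal(0.0, 1.2, (n, 2)),
                   rng.normal(1.5, 1.2, (n, 2))])

rf = RandomForestClassifier(n_estimators=100, random_state=3)
f1_filt = cross_val_score(rf, X_filt, y, cv=10, scoring="f1")
f1_raw = cross_val_score(rf, X_raw, y, cv=10, scoring="f1")
stat, p = wilcoxon(f1_filt, f1_raw)
print(f1_filt.mean(), f1_raw.mean(), p)
```

Pairing the two pipelines fold by fold, as the Wilcoxon test does, matches the study's design of comparing filtered against unfiltered training under the same 10-fold split.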


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Amirhossein Mostajabi ◽  
Hamidreza Karami ◽  
Mohammad Azadifar ◽  
Alireza Ghasemi ◽  
Marcos Rubinstein ◽  
...  

Abstract: Electromagnetic Time Reversal (EMTR) has been used to locate different types of electromagnetic sources. We propose a novel technique based on the combination of EMTR and Machine Learning (ML) for source localization. We show for the first time that ML techniques can be used in conjunction with EMTR to reduce the required number of sensors to only one for the localization of electromagnetic sources in the presence of scatterers. In the EMTR part, we use the 2D-FDTD method to generate 2D profiles of the vertical electric field as RGB images. Next, in the ML part, we take advantage of transfer learning techniques by using the pretrained VGG-19 Convolutional Neural Network (CNN) as the feature extractor. To the best of our knowledge, this is the first time that the knowledge of pretrained CNNs is applied to simulation-generated images. We demonstrate the capability of the developed methodology to localize two kinds of electromagnetic sources, namely RF sources with a bandwidth of 0.1–10 MHz and lightning impulses. For the localization of lightning, based on experimental recordings in the Säntis region, the new approach enables accurate 2D lightning localization using only one sensor, as opposed to current lightning location systems that need at least two sensors to operate.
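The pipeline shape described above is: simulated field-profile images, a frozen feature extractor, then a small classifier for the source location. In this sketch a fixed random projection stands in for the pretrained VGG-19 features (the paper uses ImageNet weights, omitted here to keep the example self-contained and offline), and the "field images" are synthetic Gaussian bumps rather than 2D-FDTD output.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

def field_image(cx, cy, noise=0.1):
    # Synthetic 16x16 "electric field profile" with a peak at the
    # source location (a stand-in for the 2D-FDTD-generated images).
    gx, gy = np.meshgrid(np.arange(16), np.arange(16))
    img = np.exp(-((gx - cx) ** 2 + (gy - cy) ** 2) / 8.0)
    return (img + noise * rng.normal(size=img.shape)).ravel()

# Two candidate source locations; the task is to tell them apart.
X = np.array([field_image(4, 4) for _ in range(100)] +
             [field_image(11, 11) for _ in range(100)])
y = np.array([0] * 100 + [1] * 100)

# Frozen "feature extractor": a fixed random linear projection.
W = rng.normal(size=(256, 32))
feats = X @ W

X_tr, X_te, y_tr, y_te = train_test_split(
    feats, y, test_size=0.25, random_state=4, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```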


Cyber-attacks are deliberate attempts by an individual or organization to breach the information systems, mainly the computers, of another individual or organization. These attacks have risen in recent years for various reasons, creating the need for systems that can use adaptive learning techniques to detect and mitigate them at an early stage. Phishing is one of the most significant cyber-attacks; according to the 2019 global security report, phishing was the major cause of attacks on corporate networks. A phishing attack uses disguised email to achieve its goal: the attacker masquerades as a trusted individual or company and tricks the email recipient into clicking malicious links or attachments. The proposed method provides a testbed for detecting and mitigating various types of phishing attacks. Machine learning techniques are used to build an intelligent system that can detect phishing attacks. This application uses the random forest algorithm with AR-Trees (acceptance-rejection tree algorithm) to detect attacks, drawing on various datasets available online as well as new datasets constructed dynamically to prepare the system to mitigate future phishing attacks.
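A minimal sketch of the detection idea: a plain random forest over email/URL features. The AR-Tree variant mentioned above is not reproduced, and the feature set (URL length, subdomain count, '@' symbol, sender-domain age) and all values are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 300

# Illustrative features: [url_length, num_subdomains, has_at_symbol,
# sender_domain_age_days]; phishing emails are modeled with longer
# URLs, more subdomains, and freshly registered domains.
legit = np.column_stack([
    rng.normal(40, 10, n), rng.integers(0, 2, n),
    np.zeros(n), rng.normal(2000, 500, n)])
phish = np.column_stack([
    rng.normal(90, 20, n), rng.integers(2, 6, n),
    rng.integers(0, 2, n), rng.normal(60, 30, n)])

X = np.vstack([legit, phish])
y = np.array([0] * n + [1] * n)  # 1 = phishing
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=5, stratify=y)

rf = RandomForestClassifier(n_estimators=100, random_state=5)
rf.fit(X_tr, y_tr)
acc = rf.score(X_te, y_te)
print(f"accuracy: {acc:.2f}")
```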


Upon application of supervised machine learning techniques, Intrusion Detection Systems (IDSs) are successful in detecting known attacks, as they use predefined attack signatures. However, detecting zero-day attacks is challenging because of the scarcity of labeled instances for them. Advanced research on IDSs applies the concept of Transfer Learning (TL) to compensate for this scarcity by making use of the abundant labeled instances present in related domain(s). This paper experimentally explores the potential of inductive and transductive transfer learning for detecting zero-day attacks, where inductive TL deals with the presence of minimal labeled instances in the target domain and transductive TL deals with their complete absence. The concept of domain adaptation with manifold alignment (DAMA) is applied for inductive TL, and a variant of DAMA is proposed to handle transductive TL given the non-availability of labeled instances. The NSL-KDD dataset is used for experimentation.
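The transductive setting above (no target labels at all) can be illustrated with a far simpler alignment than DAMA: center each domain at its own mean before applying the source-trained classifier. This crude stand-in for manifold alignment uses no target labels, and the synthetic "traffic" features and shift are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 200

def domain(shift):
    # Two traffic classes (normal/attack) in 4 features; `shift`
    # models the distribution gap between known and zero-day traffic.
    x0 = rng.normal(0.0 + shift, 1.0, (n, 4))
    x1 = rng.normal(2.0 + shift, 1.0, (n, 4))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

X_src, y_src = domain(0.0)  # labeled source domain (known attacks)
X_tgt, y_tgt = domain(5.0)  # unlabeled target domain (zero-day)

# Source model applied directly to the shifted target domain.
clf = LogisticRegression(max_iter=1000).fit(X_src, y_src)
acc_raw = clf.score(X_tgt, y_tgt)

# Crude transductive alignment: center each domain at its own mean
# (no target labels used; a stand-in for manifold alignment).
Xs = X_src - X_src.mean(axis=0)
Xt = X_tgt - X_tgt.mean(axis=0)
clf2 = LogisticRegression(max_iter=1000).fit(Xs, y_src)
acc_aligned = clf2.score(Xt, y_tgt)
print(acc_raw, acc_aligned)
```

The unaligned model collapses to chance on the shifted domain, while the aligned one recovers, which is the basic motivation for domain adaptation in the zero-day setting.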


Author(s):  
Balaji Sreenivasulu ◽  
Anjaneyulu Pasala ◽  
Gaikwad Vasanth ◽  
...  

In computer vision, domain adaptation or transfer learning plays an important role because it learns a target classifier using labeled data from a different distribution. Existing research has mostly focused on minimizing the time complexity of neural networks and works effectively on low-level features; however, it does not account for data augmentation time or the cost of labeled data. Moreover, machine learning techniques face difficulty obtaining large amounts of distributed labeled data. In this research study, a pre-trained Inception network is fine-tuned with augmented data. The study has two phases: in the first, the effectiveness of data augmentation for Inception pre-trained networks is investigated; in the second, the transfer learning approach is used to enhance the results of the first phase, and a Support Vector Machine (SVM) learns from the features extracted from the Inception layers. Experiments are conducted on a publicly available dataset to estimate the effectiveness of the proposed method. The results show that the proposed method achieved 95.23% accuracy, whereas the existing techniques, namely a deep neural network and a traditional convolutional network, achieved 87.32% and 91.32% accuracy respectively. These validation results show that the developed method achieved roughly a 4-8% improvement in accuracy over existing techniques.
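The two phases above can be sketched as: augment a small labeled image set, extract fixed features, and train an SVM on them. A frozen random projection stands in for the Inception-layer activations (the actual study fine-tunes a pretrained network), and the images, flip-based augmentation, and projection are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

def make_images(n, cls):
    # 8x8 synthetic "images": class 0 bright on the left half,
    # class 1 bright on the right half, plus noise.
    img = rng.normal(0.0, 0.3, (n, 8, 8))
    img[:, :, :4] += 1.0 if cls == 0 else 0.0
    img[:, :, 4:] += 1.0 if cls == 1 else 0.0
    return img

X_img = np.vstack([make_images(60, 0), make_images(60, 1)])
y = np.array([0] * 60 + [1] * 60)

# Phase 1: data augmentation by vertical flips, which doubles the
# set without changing labels (real pipelines also crop and jitter).
X_aug = np.vstack([X_img, X_img[:, ::-1, :]])
y_aug = np.concatenate([y, y])

# Phase 2: frozen "Inception-like" features via a fixed projection,
# then an SVM learns on the extracted features.
W = rng.normal(size=(64, 16))
feats = X_aug.reshape(len(X_aug), -1) @ W

X_tr, X_te, y_tr, y_te = train_test_split(
    feats, y_aug, test_size=0.25, random_state=7, stratify=y_aug)
acc = SVC(kernel="rbf").fit(X_tr, y_tr).score(X_te, y_te)
print(f"accuracy: {acc:.2f}")
```

Vertical flips are chosen here because they preserve the left/right class pattern; augmentations must not destroy the label-defining structure.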


2019 ◽  
Vol 8 (4) ◽  
pp. 10316-10320

Nowadays, heart disease has become a major disease among people irrespective of age; even children die of it. If the disease can be predicted before death, the chances of survival rise considerably. Pulse rate and blood pressure vary from person to person. We live in an era of data: with the rise of technology, the amount of data generated increases daily, and terabytes of data are produced and stored. For example, hospitals produce huge amounts of patient data such as chest pain, heart rate, blood pressure, and pulse rate. If we can collect this data and apply machine learning techniques, we can reduce the probability of people dying. In this paper we survey different classification and grouping strategies, namely KNN, decision tree classifier, Gaussian Naïve Bayes, support vector machine, linear regression, logistic regression, random forest classifier, random forest regression, and linear discriminant analysis. We take as input the 14 attributes present in a dataset from the UCI repository, which contains a huge amount of medical information, and apply the strategies to it to develop an accurate model for predicting heart disease. In the proposed research, the performance of the diagnosis model is obtained by using classification strategies. This paper proposes an accuracy model to predict whether a person has coronary disease or not, implemented by comparing the accuracies of the machine learning strategies listed above.
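The comparison above can be sketched with cross-validated accuracy for each classifier family named. A synthetic 14-attribute binary dataset stands in for the UCI heart-disease data (which is not bundled here), so the scores are illustrative only, not the paper's results.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in: 14 attributes, binary target (disease yes/no).
X, y = make_classification(n_samples=300, n_features=14,
                           n_informative=8, random_state=8)

models = {
    "KNN": KNeighborsClassifier(),
    "Decision tree": DecisionTreeClassifier(random_state=8),
    "Gaussian NB": GaussianNB(),
    "SVM": SVC(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Random forest": RandomForestClassifier(random_state=8),
}

# 5-fold cross-validated accuracy per model, highest first.
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} {s:.3f}")
```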


Author(s):  
Farzaneh Shoeleh ◽  
Mohammad Mehdi Yadollahi ◽  
Masoud Asadpour

Abstract: There is an implicit assumption in machine learning techniques that each new task has no relation to the tasks previously learned, so tasks are often addressed independently. However, in some domains, particularly reinforcement learning (RL), this assumption is often incorrect because tasks in the same or similar domain tend to be related. In other words, even though tasks differ in their specifics, they may have general similarities, such as shared skills, that make them related. In this paper, a novel domain adaptation-based method using adversarial networks is proposed to perform transfer learning in RL problems. Our proposed method incorporates skills previously learned from a source task to speed up learning on a new target task, providing generalization not only within a task but also across different but related tasks. The experimental results indicate the effectiveness of our method in dealing with RL problems.
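A far simpler illustration of the core idea (reusing source-task knowledge to speed learning on a target task) than the adversarial method above: tabular Q-learning on a toy chain task, comparing a fresh policy against one initialized from a trained source Q-table. The environment, hyperparameters, and the fact that source and target coincide are all simplifying assumptions for brevity; the paper tackles different but related tasks.

```python
import numpy as np

rng = np.random.default_rng(9)

def q_learning(n_states=10, episodes=200, Q=None):
    # Tabular Q-learning on a 1-D chain: start at state 0, reward at
    # the right end; actions are left (0) and right (1), with a small
    # per-step penalty to encourage short paths.
    if Q is None:
        Q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = 0
        for _ in range(100):
            a = rng.integers(2) if rng.random() < 0.1 else int(Q[s].argmax())
            s2 = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
            r = 1.0 if s2 == n_states - 1 else -0.01
            Q[s, a] += 0.5 * (r + 0.9 * Q[s2].max() - Q[s, a])
            s = s2
            if r > 0:
                break
    return Q

def greedy_steps(Q, n_states=10, cap=100):
    # Steps the greedy policy needs to reach the goal (capped).
    s, steps = 0, 0
    while s != n_states - 1 and steps < cap:
        s = max(s - 1, 0) if Q[s].argmax() == 0 else min(s + 1, n_states - 1)
        steps += 1
    return steps

Q_src = q_learning()                             # skills from source task
steps_scratch = greedy_steps(np.zeros((10, 2)))  # untrained policy
steps_transfer = greedy_steps(Q_src.copy())      # reuse source knowledge
print(steps_scratch, steps_transfer)
```

The transferred Q-table walks straight to the goal, while the untrained policy never reaches it, which is the jump-start effect transfer methods aim for.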

