Development of a Brain–Computer Interface Toggle Switch with Low False-Positive Rate Using Respiration-Modulated Photoplethysmography

Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 348 ◽  
Author(s):  
Chang-Hee Han ◽  
Euijin Kim ◽  
Chang-Hwan Im

Asynchronous brain–computer interfaces (BCIs) based on electroencephalography (EEG) generally suffer from poor performance in terms of classification accuracy and false-positive rate (FPR). Thus, BCI toggle switches based on electrooculogram (EOG) signals were developed to toggle on/off synchronous BCI systems. The conventional BCI toggle switches exhibit fast responses with high accuracy; however, they have a high FPR or cannot be applied to patients with oculomotor impairments. To circumvent these issues, we developed a novel BCI toggle switch that users can employ to toggle on or off synchronous BCIs by holding their breath for a few seconds. Two states—normal breathing and breath holding—were classified using linear discriminant analysis with features extracted from respiration-modulated photoplethysmography (PPG) signals. A real-time BCI toggle switch was implemented, with the classifier calibrated using only 1 min of PPG training data. We evaluated the performance of our PPG switch, in terms of true-positive rate and FPR, by combining it with a steady-state visual evoked potential-based BCI system designed to control four external devices. The parameters of the PPG switch were optimized through an offline experiment with five subjects, and the performance of the switch system was evaluated in an online experiment with seven subjects. All the participants successfully turned on the BCI by holding their breath for approximately 10 s (100% accuracy), and the switch system exhibited a very low FPR of 0.02 false operations per minute, which is the lowest FPR reported thus far. All participants could successfully control external devices in the synchronous BCI mode. Our results demonstrate that the proposed PPG-based BCI toggle switch can be used to implement practical BCIs.
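The two-state classification step can be sketched with scikit-learn's LDA. This is a minimal sketch assuming two invented per-window features (pulse amplitude and respiratory-band power), not the paper's actual PPG feature set:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical features per PPG window: [pulse amplitude, respiratory-band power].
# Breath holding suppresses the respiration modulation, so the second feature drops.
normal = rng.normal(loc=[1.0, 0.8], scale=0.1, size=(60, 2))   # normal breathing
hold = rng.normal(loc=[1.3, 0.2], scale=0.1, size=(60, 2))     # breath holding
X = np.vstack([normal, hold])
y = np.array([0] * 60 + [1] * 60)

# Train on half the windows (standing in for the short calibration recording)
clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])   # accuracy on held-out windows
```

In the paper's setting, a calibration recording of about one minute plays the role of the training split here.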

1979 ◽  
Vol 25 (12) ◽  
pp. 2034-2037 ◽  
Author(s):  
L B Sheiner ◽  
L A Wheeler ◽  
J K Moore

Abstract The percentage of mislabeled specimens detected (true-positive rate) and the percentage of correctly labeled specimens misidentified (false-positive rate) were computed for three previously proposed delta check methods and two linear discriminant functions. The true-positive rate was computed from a set of pairs of specimens, each having one member replaced by a member from another pair chosen at random. The relationship between true-positive and false-positive rates was similar among the delta check methods tested, indicating equal performance for all of them over the range of false-positive rate of interest. At a practical false-positive operating rate of about 5%, delta check methods detect only about 50% of mislabeled specimens; even if the actual mislabeling rate is moderate (e.g., 1%), only about 10% of specimens flagged by a delta check will actually have been mislabeled.
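A delta check of the simplest kind flags a specimen whose result differs from the patient's previous result by more than a fixed limit. A minimal sketch (the analyte and the limit are illustrative, not taken from the paper):

```python
def delta_check(prev, curr, limit):
    """Flag a specimen when its result differs from the patient's previous
    result by more than `limit` (a simple absolute-delta rule; the paper
    also evaluates linear discriminant functions)."""
    return abs(curr - prev) > limit

# A small change between successive results is not flagged, while a large
# jump (e.g. a specimen swapped between patients) is flagged for review.
same_patient = delta_check(140.0, 142.0, limit=10.0)   # plausible change
swapped = delta_check(140.0, 168.0, limit=10.0)        # suspicious jump
```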


Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1894
Author(s):  
Chun Guo ◽  
Zihua Song ◽  
Yuan Ping ◽  
Guowei Shen ◽  
Yuhei Cui ◽  
...  

Remote Access Trojan (RAT) is one of the most serious security threats that organizations face today. At present, the two major RAT detection approaches are host-based and network-based detection. To complement one another's strengths, this article proposes a phased RAT detection method combining double-side features (PRATD). In PRATD, both host-side and network-side features are combined to build detection models, which helps distinguish RATs from benign programs because RATs not only generate traffic on the network but also leave traces on the host at run time. In addition, PRATD trains two different detection models for the two runtime states of RATs to improve the True Positive Rate (TPR). Experiments on network and host records collected from five kinds of benign programs and 20 well-known RATs show that PRATD can effectively detect RATs: it achieves a TPR as high as 93.609% with a False Positive Rate (FPR) as low as 0.407% for known RATs, and a TPR of 81.928% with an FPR of 0.185% for unknown RATs, which suggests it is a competitive candidate for RAT detection.
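The double-side idea (concatenating host-side and network-side features before training) can be sketched as follows; the feature sets, label rule, and classifier here are invented placeholders, not PRATD's actual design:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical per-program feature vectors (synthetic, for illustration only)
host_feats = rng.normal(size=(200, 3))   # e.g. host-side run-time traces
net_feats = rng.normal(size=(200, 4))    # e.g. network-side traffic statistics
y = (host_feats[:, 0] + net_feats[:, 0] > 0).astype(int)   # synthetic label

# "Double-side" features: concatenate both views into one vector per program
X = np.hstack([host_feats, net_feats])
model = RandomForestClassifier(random_state=0).fit(X, y)
train_acc = model.score(X, y)
```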


2019 ◽  
Vol 128 (4) ◽  
pp. 970-995
Author(s):  
Rémy Sun ◽  
Christoph H. Lampert

Abstract We study the problem of automatically detecting if a given multi-class classifier operates outside of its specifications (out-of-specs), i.e. on input data from a different distribution than what it was trained for. This is an important problem to solve on the road towards creating reliable computer vision systems for real-world applications, because the quality of a classifier’s predictions cannot be guaranteed if it operates out-of-specs. Previously proposed methods for out-of-specs detection make decisions on the level of single inputs. This, however, is insufficient to achieve low false positive and low false negative rates at the same time. In this work, we describe a new procedure named KS(conf), based on statistical reasoning. Its main component is a classical Kolmogorov–Smirnov test that is applied to the set of predicted confidence values for batches of samples. Working with batches instead of single samples allows increasing the true positive rate without negatively affecting the false positive rate, thereby overcoming a crucial limitation of single sample tests. We show by extensive experiments using a variety of convolutional network architectures and datasets that KS(conf) reliably detects out-of-specs situations even under conditions where other tests fail. It furthermore has a number of properties that make it an excellent candidate for practical deployment: it is easy to implement, adds almost no overhead to the system, works with any classifier that outputs confidence scores, and requires no a priori knowledge about how the data distribution could change.
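The batch-level test at the heart of KS(conf) can be sketched with SciPy's two-sample Kolmogorov–Smirnov test; the Beta-distributed confidence values below are synthetic stand-ins for a real classifier's outputs:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Reference distribution of top-class confidences, recorded on validation data
ref_conf = rng.beta(8, 2, size=2000)
# One batch from the same distribution, and one from a shifted (out-of-specs)
# distribution with systematically lower confidences
in_spec_batch = rng.beta(8, 2, size=200)
out_spec_batch = rng.beta(2, 2, size=200)

p_in = ks_2samp(ref_conf, in_spec_batch).pvalue
p_out = ks_2samp(ref_conf, out_spec_batch).pvalue
# The out-of-specs batch yields a far smaller KS p-value, triggering the alarm
```

Testing a whole batch rather than single inputs is what lets the KS statistic accumulate evidence of a distribution shift.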


2020 ◽  
Vol 34 (01) ◽  
pp. 1005-1012
Author(s):  
Yu Wang ◽  
Jack Stokes ◽  
Mady Marinescu

In addition to using signatures, antimalware products also detect malicious attacks by evaluating unknown files in an emulated environment, i.e. sandbox, prior to execution on a computer's native operating system. During emulation, a file cannot be scanned indefinitely, and antimalware engines often set the number of instructions to be executed based on a set of heuristics. These heuristics make the halting decision using only partial information, which can lead to a file being executed for either too many or too few instructions. Moreover, this method is vulnerable if attackers learn the set of heuristics. Recent research uses a deep reinforcement learning (DRL) model employing a Deep Q-Network (DQN) to learn when to halt the emulation of a file. In this paper, we propose a new DRL-based system which instead employs a modified actor critic (AC) framework for the emulation halting task. This AC model dynamically predicts the best time to halt the file's execution based on a sequence of system API calls. Compared to the earlier models, the new model is capable of handling adversarial attacks by simulating their behaviors using the critic model. The new AC model demonstrates much better performance than both the DQN model and antimalware engine's heuristics. In terms of execution speed (evaluated by the halting decision), the new model halts the execution of unknown files by up to 2.5% earlier than the DQN model and 93.6% earlier than the heuristics. For the task of detecting malicious files, the proposed AC model increases the true positive rate by 9.9% from 69.5% to 76.4% at a false positive rate of 1% compared to the DQN model, and by 83.4% from 41.2% to 76.4% at a false positive rate of 1% compared to a recently proposed LSTM model.
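The headline numbers above are true positive rates at a fixed 1% false positive rate. A minimal sketch of computing that operating point from raw detection scores (the score distributions are synthetic, for illustration only):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Synthetic detection scores: malicious files score higher on average
scores = np.r_[rng.normal(0.0, 1.0, 1000),    # benign files
               rng.normal(2.5, 1.0, 1000)]    # malicious files
labels = np.r_[np.zeros(1000), np.ones(1000)]

fpr, tpr, _ = roc_curve(labels, scores)
# TPR at the most permissive threshold whose FPR does not exceed 1%
tpr_at_1pct_fpr = tpr[np.searchsorted(fpr, 0.01, side="right") - 1]
```

Reporting TPR at a fixed low FPR, rather than overall accuracy, reflects the deployment constraint that false alarms on benign files must stay rare.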


2021 ◽  
Vol 4 ◽  
Author(s):  
Jia He ◽  
Maggie X. Cheng

In machine learning, we often face the situation where the event we are interested in has very few data points buried in a massive amount of data. This is typical in network monitoring, where data are streamed from sensing or measuring units continuously but most data are not for events. With imbalanced datasets, the classifiers tend to be biased in favor of the main class. Rare event detection has received much attention in machine learning, and yet it is still a challenging problem. In this paper, we propose a remedy for this long-standing problem. Weighting and sampling are two fundamental approaches to address the problem; we focus on the weighting method in this paper. We first propose a boosting-style algorithm to compute class weights, which is proven to have excellent theoretical properties. Then we propose an adaptive algorithm, which is suitable for real-time applications. The adaptive nature of the two algorithms allows a controlled tradeoff between true positive rate and false positive rate and avoids excessive weight on the rare class, which would otherwise degrade performance on the main class. Experiments on power grid data and some public datasets show that the proposed algorithms outperform the existing weighting and boosting methods, and that their superiority is more noticeable with noisy data.
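The effect of class weighting on the TPR/FPR tradeoff can be sketched with scikit-learn's class_weight option; this is a generic illustration of weighting for imbalance, not the authors' boosting-style or adaptive algorithms:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# 990 main-class points vs. 10 rare-event points (synthetic, for illustration)
X = np.vstack([rng.normal(0.0, 1.0, size=(990, 2)),
               rng.normal(1.5, 1.0, size=(10, 2))])
y = np.r_[np.zeros(990), np.ones(10)]

# With equal weights the classifier is biased toward the main class;
# raising the rare-class weight shifts the boundary toward the main class.
unweighted = LogisticRegression(class_weight={0: 1, 1: 1}).fit(X, y)
reweighted = LogisticRegression(class_weight={0: 1, 1: 50}).fit(X, y)
```

Raising the rare-class weight buys true positives at the cost of more false positives, which is exactly the tradeoff the adaptive algorithms are designed to control rather than leave fixed.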


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Futai Zou ◽  
Siyu Zhang ◽  
Weixiong Rao ◽  
Ping Yi

Malware remains a major threat to today's Internet. In this paper, we propose a DNS graph mining-based malware detection approach. A DNS graph is composed of DNS nodes, which represent server IPs, client IPs, and queried domain names in the process of DNS resolution. After the graph construction, we transform the problem of malware detection into the graph mining task of inferring the reputation scores of graph nodes using the belief propagation algorithm. Nodes with lower reputation scores are more likely to be infected by malware. For demonstration, we evaluate the proposed malware detection approach on a real-world dataset collected from campus DNS servers over three months, from which we built a DNS graph consisting of 19,340,820 vertices and 24,277,564 edges. On this graph, we achieve a true positive rate of 80.63% with a false positive rate of 0.023%. With a false positive rate of 1.20%, the true positive rate improves to 95.66%. We detected 88,592 hosts infected by malware or acting as C&C servers, accounting for 5.47% of all hosts; meanwhile, 117,971 domains are considered to be related to malicious activities, accounting for 1.5% of all domains. The results indicate that our method is efficient and effective in detecting malware.
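A heavily simplified sketch of the score-propagation idea on a host-domain graph: each unknown node repeatedly takes the mean score of its neighbours, while known-bad and known-good domains are held fixed. This is a caricature of belief propagation (no edge potentials or message passing) on an invented toy graph:

```python
import numpy as np

# Toy DNS graph: client hosts on one side, queried domains on the other
edges = [("h1", "bad.example"), ("h1", "good.example"),
         ("h2", "good.example"), ("h3", "bad.example")]
nodes = sorted({n for e in edges for n in e})
idx = {n: i for i, n in enumerate(nodes)}

score = np.full(len(nodes), 0.5)                       # prior: unknown
seeds = {"bad.example": 1.0, "good.example": 0.0}      # blacklist / whitelist
for n, s in seeds.items():
    score[idx[n]] = s

neighbours = {n: [] for n in nodes}
for a, b in edges:
    neighbours[a].append(b)
    neighbours[b].append(a)

for _ in range(10):                                    # iterate until settled
    new = score.copy()
    for n in nodes:
        if n not in seeds:                             # seeded nodes stay fixed
            new[idx[n]] = np.mean([score[idx[m]] for m in neighbours[n]])
    score = new
# h3 queries only the blacklisted domain, so it ends with the worst reputation
```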


Web use and digitized information are increasing every day, and so is the volume of data generated. At the same time, security attacks pose numerous threats to networks, websites, and the Internet. Intrusion detection in a high-speed network is a genuinely hard task. A Hadoop implementation is used to address this challenge, namely detecting intrusions in a big-data environment in real time. To classify anomalous packet flows, machine learning approaches are used. Naive Bayes performs classification over a vector of feature values drawn from some finite set. Decision Tree is another supervised machine learning classifier, with a flowchart-like tree structure. The J48 and Naive Bayes algorithms are implemented in the Hadoop MapReduce framework for parallel processing, using the corrected KDDCup benchmark dataset records. The results obtained are an 89.9% True Positive rate with a 0.04% False Positive rate for the Naive Bayes algorithm, and a 98.06% True Positive rate with a 0.001% False Positive rate for the Decision Tree algorithm.
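Outside the Hadoop setting, the two classifiers can be compared directly with scikit-learn; the data below are synthetic stand-ins for the KDDCup records:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for labelled network-flow features (not the KDDCup data)
X = np.vstack([rng.normal(0.0, 1.0, size=(300, 4)),    # normal traffic
               rng.normal(2.0, 1.0, size=(300, 4))])   # anomalous traffic
y = np.r_[np.zeros(300), np.ones(300)]

X_train, y_train = X[::2], y[::2]
X_test, y_test = X[1::2], y[1::2]
nb_acc = GaussianNB().fit(X_train, y_train).score(X_test, y_test)
dt_acc = DecisionTreeClassifier(random_state=0).fit(X_train, y_train).score(X_test, y_test)
```

In the paper, both learners run inside MapReduce jobs so the training work parallelizes over the cluster; the classification logic itself is the same.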


Author(s):  
Abikoye Oluwakemi Christianah ◽  
Benjamin Aruwa Gyunka ◽  
Akande Noah Oluwatobi

<p>The Android operating system has become very popular, with the highest market share amongst all mobile operating systems, due to its open source nature and user friendliness. This has brought about an uncontrolled rise in malicious applications targeting the Android platform. Emerging strains of Android malware employ highly sophisticated detection- and analysis-avoidance techniques, such that traditional signature-based detection methods have become less potent in their ability to detect new and unknown malware. Alternative approaches, such as machine learning techniques, have taken the lead for timely zero-day anomaly detection. This study aimed at developing an optimized Android malware detection model using an ensemble learning technique. Random Forest, Support Vector Machine, and k-Nearest Neighbours were used to develop three distinct base models, and their predictive results were further combined using a majority vote combination function to produce an ensemble model. A reverse engineering procedure was employed to extract static features from a large repository of malware samples and benign applications. The WEKA 3.8.2 data mining suite was used to perform all the learning experiments. The results showed that Random Forest had a true positive rate of 97.9% and a false positive rate of 1.9%, and correctly classified 98% of instances, making it a strong base model. The ensemble model had a true positive rate of 98.1% and a false positive rate of 1.8%, and correctly classified 98.16% of instances. The findings show that, although the base learners had good detection results, the ensemble learner produced a better-optimized detection model than any of the base learners.</p>
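The majority-vote ensemble over the three base learners can be sketched with scikit-learn's VotingClassifier (the study itself used WEKA; the data here are synthetic):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for static features extracted by reverse engineering
X = rng.normal(size=(400, 5))
y = (X[:, :2].sum(axis=1) > 0).astype(int)

# Hard voting: each base model casts one vote, majority wins
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC()),
                ("knn", KNeighborsClassifier())],
    voting="hard")
ensemble.fit(X[:300], y[:300])
acc = ensemble.score(X[300:], y[300:])
```

Majority voting helps when the base learners make uncorrelated errors: an instance is misclassified only when at least two of the three models get it wrong.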


2017 ◽  
Vol 28 (1) ◽  
pp. 184-195 ◽  
Author(s):  
Hanfang Yang ◽  
Kun Lu ◽  
Xiang Lyu ◽  
Feifang Hu

Simultaneous control of the true positive rate and false positive rate is of significant importance in the performance evaluation of diagnostic tests. Most of the established literature uses the partial area under the receiver operating characteristic (ROC) curve with restrictions only on the false positive rate (FPR), called FPR pAUC, as a performance measure. However, its indirect control of the true positive rate (TPR) is conceptually and practically misleading. In this paper, a novel and intuitive performance measure, named two-way pAUC, is proposed, which directly quantifies the partial area under the ROC curve with explicit restrictions on both TPR and FPR. To estimate two-way pAUC, we devise a nonparametric estimator. Based on the estimator, a bootstrap-assisted testing method for two-way pAUC comparison is established. Moreover, to evaluate possible covariate effects on two-way pAUC, a regression analysis framework is constructed. Asymptotic normality of the proposed methods is provided. Advantages of the proposed methods are illustrated by simulation and the Wisconsin Breast Cancer Data. We provide the methods in a publicly available R package, tpAUC.
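A minimal sketch of estimating two-way pAUC empirically: restrict the ROC curve to FPR ≤ max_fpr, keep only the portion with TPR ≥ min_tpr, and integrate. This is a naive empirical version, not the authors' estimator from the tpAUC package:

```python
import numpy as np
from sklearn.metrics import roc_curve

def two_way_pauc(y_true, scores, max_fpr, min_tpr):
    """Area of the ROC region with FPR <= max_fpr AND TPR >= min_tpr."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    grid = np.linspace(0.0, max_fpr, 200)          # FPR grid over the restriction
    tpr_i = np.interp(grid, fpr, tpr)              # ROC height at each grid point
    excess = np.clip(tpr_i - min_tpr, 0.0, None)   # keep only the part above min_tpr
    # trapezoidal integration of the clipped curve
    return float(np.sum((excess[1:] + excess[:-1]) / 2 * np.diff(grid)))

# A perfect classifier reaches TPR = 1 at FPR = 0, so the restricted area
# is max_fpr * (1 - min_tpr) = 0.5 * 0.5 = 0.25 here.
area = two_way_pauc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9], max_fpr=0.5, min_tpr=0.5)
```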


Blood ◽  
2019 ◽  
Vol 134 (Supplement_1) ◽  
pp. 4477-4477
Author(s):  
Zahra Eftekhari ◽  
Sally Mokhtari ◽  
Tushondra Thomas ◽  
Dongyun Yang ◽  
Liana Nikolaenko ◽  
...  

Sepsis contributes significantly to early treatment-related mortality after hematopoietic cell transplantation (HCT). Since the clinical presentation and characteristics of sepsis immediately after HCT can be different from those seen in the general population or in patients receiving non-HCT chemotherapy, detecting early signs of sepsis in HCT recipients becomes critical. Herein, we extended our earlier analyses (Dadwal et al. ASH 2018) and evaluated a consecutive case series of 1806 patients who underwent HCT at City of Hope (2014-2017) to develop a machine-learning sepsis prediction model for HCT recipients, namely Early Sepsis Prediction/Identification for Transplant Recipients (ESPRIT), using variables within the Electronic Health Record (EHR) data. The primary clinical event was sepsis diagnosis within 100 days post-HCT, identified based on the use of the institutional "sepsis management order set" and mention of "sepsis" in the progress notes. The time of the sepsis order set was considered the time of sepsis for the analyses. Data from 2014 to 2016 (108 visits with and 1315 visits without sepsis, 8% sepsis prevalence) were used as the training set and data from 2017 (24 visits with and 359 visits without sepsis, 6.6% sepsis prevalence) were kept as the holdout dataset for testing the model. From each patient visit, 61 variables were collected, with a total of 862,009 lab values, 3,284,561 vital sign values and 249,982 medication orders for 1806 visits over the duration of HCT hospitalization (median: 24.1 days, range: 7-304). An ensemble of 100 random forest classification models was used to develop the prediction model. Last Observation Carried Forward (LOCF) imputation was used to fill missing values with the last observed value of each variable. For model development and optimization, we applied 5-fold stratified cross validation on the training dataset.
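LOCF imputation as described above can be sketched with pandas: within each patient, a missing value is replaced by the most recent observed value (the toy table below is invented, not from the study's EHR data):

```python
import pandas as pd

# Hourly lab values for two admissions; gaps are filled per patient by
# carrying the last observation forward (LOCF).
df = pd.DataFrame({
    "patient": ["A", "A", "A", "B", "B"],
    "hour":    [0, 1, 2, 0, 1],
    "lactate": [1.2, None, None, 0.9, None],
})
df["lactate"] = df.groupby("patient")["lactate"].ffill()
```

Grouping by patient before the forward-fill matters: without it, patient B's first gap could be filled with patient A's last value.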
Variable importance for the 100 models was assessed using the Gini mean decrease accuracy method, averaged to produce the final variable importance. HCT was autologous in 798 and allogeneic in 1008 patients. An ablative conditioning regimen was delivered to 97.3% and 38.3% of patients in the autologous and allogeneic groups, respectively. When the impact of "sepsis" was analyzed as a time-dependent variable, sepsis development was associated with increased mortality (HR=2.79, 95%CI: 2.14-3.64, p<0.001) by multivariable Cox regression model. Retrospective evaluation at 0, 4, 8 and 12 hours pre-sepsis showed areas under the ROC curves (AUCs) of 0.98, 0.91, 0.90 and 0.85, respectively (Fig 1a), outperforming the widely used Modified Early Warning Score (MEWS) (Fig 1b). We then simulated ESPRIT's performance on unselected real-world data by running the model every hour from admission to sepsis or discharge, whichever occurred first, creating an hourly risk score for each visit. ESPRIT achieved an AUC of 0.83 on the training and an AUC of 0.82 on the holdout test dataset (Fig 2). An example of risk over time for a septic patient who was identified by the model with a 27-hour lead time at a threshold of 0.6 is shown in Fig 3. With an at-risk threshold of 0.6 (sensitivity: 0.4, specificity: 0.93), ESPRIT had a median lead time of 35 and 47 hours on the training and holdout test data, respectively. This model allows users to select any threshold (with a specific false positive/negative rate expected for a given population) for specific purposes. For example, a red flag can be assigned to a patient when the risk passes the threshold of 0.6. At this threshold the false positive rate is only 7% and the true positive rate is 40%. Then a yellow flag can be assigned at the threshold of 0.4, at which the model has a higher (38%) false positive rate but also a high (90%) true positive rate.
Using this two-step assessment/intervention system (red flag as an alarm and yellow flag as a warning sign to examine the patient to rule out sepsis), the model would achieve 90% sensitivity and 93% specificity in practice and overcome the low positive predictive value due to the rare incidence of sepsis. In summary, we developed and validated a novel machine learning monitoring system for sepsis prediction in HCT recipients. Our data strongly support further clinical validation of the ESPRIT model as a method to provide real-time sepsis predictions, and timely initiation of preemptive antibiotics therapy according to the predicted risks in the era of EHR. Disclosures Dadwal: Ansun biopharma: Research Funding; SHIRE: Research Funding; Janssen: Membership on an entity's Board of Directors or advisory committees; Merck: Membership on an entity's Board of Directors or advisory committees; Clinigen: Membership on an entity's Board of Directors or advisory committees. Nakamura:Kirin Kyowa: Other: support for an academic seminar in a university in Japan; Merck: Membership on an entity's Board of Directors or advisory committees; Celgene: Other: support for an academic seminar in a university in Japan; Alexion: Other: support to a lecture at a Japan Society of Transfusion/Cellular Therapy meeting .

