A Dynamic Plane Prediction Method Using the Extended Frame in Smart Dust IoT Environments

Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1364 ◽  
Author(s):  
Joonsuu Park ◽  
KeeHyun Park

Internet of Things (IoT) technologies are already all around us, and we stand at the cusp of the next generation of those technologies. Indeed, next-generation IoT technologies are evolving before current IoT technologies have been fully adopted, and smart dust IoT technology is one such example. Smart dust IoT, which features very small devices with low computing power, enables many things that were previously unimaginable, but at the same time creates unresolved problems. One of the biggest is the data-transmission bottleneck caused by the sheer number of devices. The bottleneck problem was previously solved with the Data Plane Development Kit (DPDK) architecture. However, the DPDK solution created an unexpected new problem, called the mixed packet problem: when large numbers of data packets and control packets mix and their proportions change rapidly, the system can slow significantly. In this paper, we propose a dynamic partitioning algorithm that solves the mixed packet problem by physically separating the planes and using a learning algorithm to determine the ratio between the separated planes. In addition, we propose a training data model, the eXtended Permuted Frame (XPF), which greatly increases the amount of training data while reflecting the packet characteristics of the system. With the mixed packet problem solved in this way, the proposed dynamic partitioning algorithm performed about 72% better than a plain DPDK environment and came about 88% closer to the ideal environment.
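The abstract does not specify the internals of XPF. As a hedged illustration of the general idea only, multiplying training data by permuting packet frames, here is a minimal Python sketch in which a "frame" is assumed to be a short sequence of (data, control) packet counts per time slot; the representation and the cap on augmentations are illustrative assumptions, not details from the paper:

```python
from itertools import permutations

def extended_permuted_frames(frame, max_aug=10):
    """Hypothetical sketch of permutation-based augmentation:
    reorder a frame's sub-windows to yield extra training samples
    that preserve the frame's overall packet composition."""
    augmented = []
    for i, perm in enumerate(permutations(frame)):
        if i >= max_aug:
            break
        augmented.append(list(perm))
    return augmented

# Toy frame: (data, control) packet counts in three consecutive slots.
frame = [(8, 2), (5, 5), (1, 9)]
samples = extended_permuted_frames(frame)  # 3! = 6 permuted variants
```

Every augmented sample keeps the same packet totals as the original frame, so a ratio-prediction model sees more examples with identical composition but different temporal orderings.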

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Daniel J. Gauthier ◽  
Erik Bollt ◽  
Aaron Griffith ◽  
Wendson A. S. Barbosa

Abstract Reservoir computing is a best-in-class machine learning algorithm for processing information generated by dynamical systems using observed time-series data. Importantly, it requires very small training data sets, uses linear optimization, and thus requires minimal computing resources. However, the algorithm uses randomly sampled matrices to define the underlying recurrent neural network and has a multitude of metaparameters that must be optimized. Recent results demonstrate the equivalence of reservoir computing to nonlinear vector autoregression, which requires no random matrices, fewer metaparameters, and provides interpretable results. Here, we demonstrate that nonlinear vector autoregression excels at reservoir computing benchmark tasks and requires even shorter training data sets and training time, heralding the next generation of reservoir computing.
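The nonlinear vector autoregression described here has a compact recipe: a feature vector built from lagged observations plus their quadratic products, trained with a purely linear (ridge) readout. A minimal sketch on a toy system; the logistic map, its parameters, and the tiny ridge penalty are illustrative choices, not the paper's benchmarks (the quadratic feature set happens to represent this map exactly, so one-step prediction should be near-perfect):

```python
import numpy as np

def make_features(x, k):
    """Nonlinear VAR feature vectors: bias, k lagged values,
    and all unique quadratic products of those lags."""
    rows = []
    for t in range(k - 1, len(x)):
        lin = x[t - k + 1 : t + 1][::-1]          # x[t], x[t-1], ...
        quad = [lin[i] * lin[j] for i in range(k) for j in range(i, k)]
        rows.append(np.concatenate(([1.0], lin, quad)))
    return np.array(rows)

def ridge_fit(Phi, y, alpha=1e-8):
    """Linear (ridge) readout -- the only training NVAR needs."""
    A = Phi.T @ Phi + alpha * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

# Toy demo: learn the logistic map x' = 3.8 x (1 - x).
x = np.empty(200); x[0] = 0.3
for n in range(199):
    x[n + 1] = 3.8 * x[n] * (1 - x[n])

k = 1
Phi = make_features(x[:-1], k)   # features at times 0..198
y = x[k:]                        # one-step-ahead targets
w = ridge_fit(Phi, y)
err = np.max(np.abs(Phi @ w - y))
```

Because training reduces to a single linear solve, there are no random reservoir matrices and essentially two metaparameters (the lag count k and the ridge penalty).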


Author(s):  
Qian Guo ◽  
Mo Li ◽  
Chunhui Wang ◽  
Peihong Wang ◽  
Zhencheng Fang ◽  
...  

Abstract The recent outbreak of pneumonia in Wuhan, China caused by the 2019 Novel Coronavirus (2019-nCoV) emphasizes the importance of detecting novel viruses and predicting their risk of infecting people. In this report, we introduce VHP (Virus Host Prediction), which predicts the potential hosts of viruses using a deep learning algorithm. Our prediction suggests that 2019-nCoV has infectivity close to that of other human coronaviruses, especially the severe acute respiratory syndrome coronavirus (SARS-CoV), bat SARS-like coronaviruses, and the Middle East respiratory syndrome coronavirus (MERS-CoV). Compared to coronaviruses infecting other vertebrates, bat coronaviruses are assigned infectivity patterns more similar to that of 2019-nCoV. Furthermore, by comparing the infectivity patterns of all viruses hosted by vertebrates, we found that mink viruses also show an infectivity pattern close to that of 2019-nCoV. This infectivity pattern analysis indicates that bats and minks may be two candidate reservoirs of 2019-nCoV. These results warn us to beware of 2019-nCoV and guide further exploration of its properties and reservoir. One Sentence Summary: It is of great value to identify whether a newly discovered virus risks infecting humans. Guo et al. propose a virus host prediction method based on deep learning that detects what kind of host a virus can infect, with a DNA sequence as input. Applied to the Wuhan 2019 Novel Coronavirus, the prediction demonstrates that several vertebrate-infectious coronaviruses have strong potential to infect humans. This method will be helpful in future viral analysis and in the early prevention and control of viral pathogens.
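The abstract gives no architectural details for VHP. One common front end for DNA-sequence classifiers of this kind is a fixed-length k-mer frequency encoding, sketched below as an assumed illustration (VHP's actual input representation and network may differ):

```python
from collections import Counter
from itertools import product

def kmer_frequencies(seq, k=3):
    """Encode a DNA sequence as a normalized k-mer frequency vector,
    a fixed-length (4**k) input suitable for a downstream classifier."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(1, len(seq) - k + 1)
    return [counts[m] / total for m in kmers]

vec = kmer_frequencies("ACGTACGTAC", k=2)  # 16-dim frequency vector
```

Sequences of any length map to the same 4^k-dimensional vector, which is what lets a single model compare viral genomes of different sizes.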


2019 ◽  
Author(s):  
Andrew Medford ◽  
Shengchun Yang ◽  
Fuzhu Liu

Understanding the interaction of multiple types of adsorbate molecules on solid surfaces is crucial to establishing the stability of catalysts under various chemical environments. Computational studies of high and mixed coverages of reaction intermediates remain challenging, especially for transition-metal compounds. In this work, we present a framework to predict differential adsorption energies and identify low-energy structures under high- and mixed-adsorbate coverages on oxide materials. The approach uses Gaussian process machine-learning models with quantified uncertainty in conjunction with an iterative training algorithm to actively identify the training set. The framework is demonstrated for the mixed adsorption of CHx, NHx, and OHx species on oxygen-vacancy and pristine rutile TiO2(110) surface sites. The results indicate that the proposed algorithm is highly efficient at identifying the most valuable training data, and is able to predict differential adsorption energies with a mean absolute error of ~0.3 eV based on <25% of the total DFT data. The algorithm is also used to identify 76% of the low-energy structures based on <30% of the total DFT data, enabling construction of surface phase diagrams that account for high and mixed coverage as a function of the chemical potential of C, H, O, and N. Furthermore, the computational scaling indicates the algorithm scales nearly linearly (N^1.12) as the number of adsorbates increases. This framework can be directly extended to metals, metal oxides, and other materials, providing a practical route toward investigating the behavior of catalysts under high-coverage conditions.
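The core loop described, a Gaussian process with quantified uncertainty that actively picks its own training points, can be sketched generically. Everything below (the RBF kernel, a max-variance acquisition rule, the toy 1-D target) is an illustrative assumption standing in for the paper's surface-science setup:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-6):
    """GP posterior mean and per-point variance at test inputs."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ ytr
    var = np.diag(rbf(Xte, Xte) - Ks @ Kinv @ Ks.T)
    return mu, var

# Active learning: always label the candidate the GP is least sure about.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0])               # toy stand-in for DFT energies

train_idx = [0, 1]
for _ in range(10):
    pool = [i for i in range(len(X)) if i not in train_idx]
    _, var = gp_posterior(X[train_idx], y[train_idx], X[pool])
    train_idx.append(pool[int(np.argmax(var))])

mu, _ = gp_posterior(X[train_idx], y[train_idx], X)
mae = np.mean(np.abs(mu - y))
```

The max-variance rule naturally spreads labeled points into unexplored regions, which is why such loops can reach low error while requesting only a fraction of the candidate calculations.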


Genetics ◽  
2021 ◽  
Author(s):  
Marco Lopez-Cruz ◽  
Gustavo de los Campos

Abstract Genomic prediction uses DNA sequences and phenotypes to predict genetic values. In homogeneous populations, theory indicates that the accuracy of genomic prediction increases with sample size. However, differences in allele frequencies and in linkage disequilibrium patterns can lead to heterogeneity in SNP effects. In this context, calibrating genomic predictions using a large, potentially heterogeneous, training data set may not lead to optimal prediction accuracy. Some studies have tried to address this sample size/homogeneity trade-off using training set optimization algorithms; however, this approach assumes that a single training data set is optimal for all individuals in the prediction set. Here, we propose an approach that identifies, for each individual in the prediction set, a subset of the training data (i.e., a set of support points) from which predictions are derived. The proposed methodology is a Sparse Selection Index (SSI) that integrates selection index methodology with sparsity-inducing techniques commonly used for high-dimensional regression. The sparsity of the resulting index is controlled by a regularization parameter (λ); G-BLUP (the prediction method most commonly used in plant and animal breeding) appears as the special case λ = 0. In this study, we present the methodology and demonstrate (using two wheat data sets with phenotypes collected in ten different environments) that the SSI can achieve significant gains in prediction accuracy (between 5 and 10%) relative to G-BLUP.
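The G-BLUP special case mentioned (λ = 0, i.e., no sparsity) has a standard closed form, sketched here on simulated data as a hedged baseline. Note that `lam` below is the usual residual-to-genetic variance ratio used by G-BLUP, not the SSI's sparsity parameter λ, and the simulated genotypes are purely illustrative:

```python
import numpy as np

def grm(M):
    """Genomic relationship matrix from a centered marker matrix."""
    Z = M - M.mean(axis=0)
    return Z @ Z.T / M.shape[1]

def gblup(G, y_trn, trn, tst, h2=0.5):
    """G-BLUP: u_tst = G[tst,trn] (G[trn,trn] + lam*I)^-1 y,
    with shrinkage lam = (1 - h2) / h2."""
    lam = (1 - h2) / h2
    K = G[np.ix_(trn, trn)] + lam * np.eye(len(trn))
    return G[np.ix_(tst, trn)] @ np.linalg.solve(K, y_trn)

# Simulated toy data: 200 individuals, 300 markers, heritability ~0.5.
rng = np.random.default_rng(42)
n, p = 200, 300
M = rng.integers(0, 3, size=(n, p)).astype(float)   # genotype codes 0/1/2
beta = rng.normal(0, 1, p) / np.sqrt(p)
g = (M - M.mean(0)) @ beta                          # true genetic values
y = g + rng.normal(0, g.std(), n)                   # phenotypes

G = grm(M)
trn, tst = np.arange(150), np.arange(150, 200)
pred = gblup(G, y[trn], trn, tst)
acc = np.corrcoef(pred, g[tst])[0, 1]               # prediction accuracy
```

In this formulation every training individual contributes to every prediction; the SSI instead zeroes out most of those contributions per predicted individual, keeping only a sparse set of support points.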


Photonics ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 3
Author(s):  
Shun Qin ◽  
Wai Kin Chan

Accurate segmented-mirror wavefront sensing and control is essential for next-generation large-aperture telescope design. In this paper, a direct tip–tilt and piston error detection technique based on model-based phase retrieval with multiple defocused images is proposed for segmented-mirror wavefront sensing. In our technique, the tip–tilt and piston errors are represented in a basis of three basic plane functions with respect to the x, y, and z axes, so that they can be parameterized by the coefficients of these bases; the coefficients are then solved for by a nonlinear optimization method using the multiple defocused images. Simulation results show that the proposed technique is capable of measuring wavefront errors with a high dynamic range, reaching 7λ, with high detection accuracy. The phase parameterization also makes the algorithm robust to noise. In comparison, the proposed tip–tilt and piston error detection approach is much easier to implement than many existing methods, which usually require extra sensors and devices, since it relies only on multiple images. These characteristics make it promising for wavefront sensing and control in next-generation large-aperture telescopes.
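The tip–tilt–piston parameterization amounts to representing each segment's wavefront error as a plane a·x + b·y + c. As an assumed illustration of that linear core only, the sketch below recovers the three coefficients from noisy sampled phase values by least squares; the paper instead recovers them from defocused intensity images via a nonlinear optimizer, which this toy does not attempt:

```python
import numpy as np

# Sample a synthetic segment wavefront: plane a*x + b*y + c plus noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(50, 2))               # pupil sample points
true = np.array([0.3, -0.7, 1.2])                   # tip, tilt, piston
phase = xy @ true[:2] + true[2] + rng.normal(0, 1e-3, 50)

# Fit the three plane coefficients: columns are x, y, and the constant 1.
A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
coef, *_ = np.linalg.lstsq(A, phase, rcond=None)    # [tip, tilt, piston]
```

Reducing each segment to three parameters is what makes the downstream optimization tractable and, as the abstract notes, resistant to per-pixel noise.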


Entropy ◽  
2021 ◽  
Vol 23 (1) ◽  
pp. 126
Author(s):  
Sharu Theresa Jose ◽  
Osvaldo Simeone

Meta-learning, or “learning to learn”, refers to techniques that infer an inductive bias from data corresponding to multiple related tasks, with the goal of improving sample efficiency on new, previously unobserved tasks. A key performance measure for meta-learning is the meta-generalization gap, that is, the difference between the average loss measured on the meta-training data and that on a new, randomly selected task. This paper presents novel information-theoretic upper bounds on the meta-generalization gap. Two broad classes of meta-learning algorithms are considered, which use either separate within-task training and test sets, like model-agnostic meta-learning (MAML), or joint within-task training and test sets, like Reptile. Extending existing work on conventional learning, an upper bound on the meta-generalization gap is derived for the former class that depends on the mutual information (MI) between the output of the meta-learning algorithm and its input meta-training data. For the latter, the derived bound includes an additional MI term between the output of the per-task learning procedure and the corresponding data set, capturing within-task uncertainty. Tighter bounds are then developed for both classes via novel individual-task MI (ITMI) bounds. Applications of the derived bounds are finally discussed, including a broad class of noisy iterative algorithms for meta-learning.
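The conventional-learning result being extended is the well-known mutual-information generalization bound (Xu and Raginsky): for an algorithm with output W trained on an n-sample data set S, with a σ-sub-Gaussian loss,

```latex
\left| \mathbb{E}\!\left[ L_\mu(W) - L_S(W) \right] \right|
  \;\le\; \sqrt{ \frac{2\sigma^2}{n}\, I(W; S) }
```

where L_μ is the population loss and L_S the empirical loss. Schematically (the exact constants and conditions are in the paper, not reproduced here), the meta-learning bound for the separate within-task class has the analogous structure over N tasks, with the MI term I(U; Z_{1:N}) between the meta-learner's output U and the N tasks' meta-training data taking the place of I(W; S).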


Mathematics ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 830
Author(s):  
Seokho Kang

k-nearest neighbor (kNN) is a widely used algorithm for supervised learning tasks. In practice, the main challenge when using kNN is its high sensitivity to the hyperparameter setting, including the number of nearest neighbors k, the distance function, and the weighting function. To improve robustness to these hyperparameters, this study presents a novel kNN learning method based on a graph neural network, named kNNGNN. Given training data, the method learns a task-specific kNN rule in an end-to-end fashion by means of a graph neural network that takes the kNN graph of an instance as input and predicts the instance's label. The distance and weighting functions are implicitly embedded within the graph neural network. For a query instance, the prediction is obtained by performing a kNN search over the training data to create a kNN graph and passing it through the graph neural network. The effectiveness of the proposed method is demonstrated on various benchmark datasets for classification and regression tasks.
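For contrast with the learned rule, the hand-designed baseline that kNNGNN generalizes, kNN with a fixed distance function and a fixed inverse-distance weighting function, fits in a few lines (toy 1-D regression data, illustrative only):

```python
import numpy as np

def weighted_knn_predict(Xtr, ytr, x, k=3):
    """Classic distance-weighted kNN regression: the distance and
    weighting functions are fixed by hand, which is exactly what
    kNNGNN replaces with a learned graph-neural-network readout."""
    d = np.linalg.norm(Xtr - x, axis=1)       # distance to every training point
    nn = np.argsort(d)[:k]                    # indices of the k nearest
    w = 1.0 / (d[nn] + 1e-12)                 # inverse-distance weights
    return float(np.dot(w, ytr[nn]) / w.sum())

Xtr = np.array([[0.0], [1.0], [2.0], [3.0]])
ytr = np.array([0.0, 1.0, 2.0, 3.0])
pred = weighted_knn_predict(Xtr, ytr, np.array([1.1]), k=2)
```

Every quantity here (k, the Euclidean distance, the 1/d weights) is a hyperparameter choice; kNNGNN's point is that the latter two can be absorbed into a network trained end to end on the kNN graph.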

