Machine Learning Based Flexible Transmission Time Interval Scheduling for eMBB and uRLLC Coexistence Scenario

IEEE Access
2019
Vol 7
pp. 65811-65820
Author(s):
Jingxuan Zhang
Xiaodong Xu
Kangjie Zhang
Bufang Zhang
Xiaofeng Tao
...
Geosciences
2019
Vol 9 (7)
pp. 308
Author(s):
Valeri G. Gitis
Alexander B. Derendyaev

In this paper, we suggest two machine learning methods for seismic hazard forecasting. The first method is used for spatial forecasting of the maximum possible earthquake magnitude (M_max), whereas the second is used for spatio-temporal forecasting of strong earthquakes. The first method, the method of approximation of interval expert estimates, is based on a regression approach in which the values of M_max at the points of the training sample are estimated by experts. The method allows one to formalize the knowledge of experts, to find the dependence of M_max on the properties of the geological environment, and to construct a map of the spatial forecast. The second method, the method of the minimum area of alarm, uses retrospective data to identify the alarm area in which the epicenters of strong (target) earthquakes are expected within a certain time interval. This method is the basis of an automatic web-based platform that systematically forecasts target earthquakes. The results of testing the approach to earthquake prediction in the Mediterranean and Californian regions are presented. For the tests, well-known parameters of earthquake catalogs were used. The method showed satisfactory forecast quality.
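
To illustrate the regression idea behind the first method, here is a minimal sketch, assuming experts supply interval bounds on M_max at sample points and that per-cell geological features are available; the feature names, data, and choice of regressor are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: experts give interval bounds [lo, hi] for M_max at
# training points; a regressor relates geological features to those estimates.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))                # stand-ins for geological features
lo = 5.0 + 0.4 * X[:, 0] + rng.normal(scale=0.1, size=n)  # expert lower bounds
hi = lo + rng.uniform(0.2, 0.8, size=n)                   # expert upper bounds

# One simple way to use interval-valued targets: regress on the midpoint.
model = GradientBoostingRegressor().fit(X, (lo + hi) / 2)

grid_cells = rng.normal(size=(10, 3))      # stand-in for a spatial grid of map cells
print(model.predict(grid_cells).round(2))  # predicted M_max per cell
```

Regressing on the interval midpoint is only one simple way to handle interval-valued targets; the paper's formalization of expert estimates is more elaborate.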


2019
Vol 9 (23)
pp. 5003
Author(s):
Francesco Zola
Jan Lukas Bruse
Maria Eguimendia
Mikel Galar
Raul Orduna Urrutia

The Bitcoin network is not only vulnerable to cyber-attacks but also currently represents the most frequently used cryptocurrency for concealing illicit activities. Typically, Bitcoin activity is monitored by decreasing the anonymity of its entities using machine learning-based techniques that consider the whole blockchain. This entails two issues: first, it increases the complexity of the analysis, requiring higher effort, and second, it may hide network micro-dynamics important for detecting short-term changes in entity behavioral patterns. The aim of this paper is to address both issues by performing a "temporal dissection" of the Bitcoin blockchain, i.e., dividing it into smaller temporal batches to achieve entity classification. The idea is that a machine learning model trained on a certain time interval (batch) should achieve good classification performance when tested on another batch if entity behavioral patterns are similar. We apply cascading machine learning principles—a type of ensemble learning applying stacking techniques—introducing a "k-fold cross-testing" concept across batches of varying size. Results show that the blockchain batch size used for entity classification could be reduced for certain classes (Exchange, Gambling, and eWallet), as classification rates did not vary significantly with batch size, suggesting that behavioral patterns did not change significantly over time. Mixer and Market class detection, however, can be negatively affected. A deeper analysis of Mining Pool behavior showed that models trained on recent data perform better than models trained on older data, suggesting that "typical" Mining Pool behavior may be represented better by recent data. This work provides a first step towards uncovering entity behavioral changes via temporal dissection of blockchain data.
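
A minimal sketch of the cross-testing idea as we read it: train a classifier on one temporal batch of entity features and evaluate it on every other batch, so a drop in accuracy flags a change in entity behavior. The data, features, and classifier below are synthetic placeholders, not the paper's cascading/stacking setup.

```python
# Train on batch i, test on every other batch j: "cross-testing" across time.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
batches = [
    (rng.normal(size=(300, 5)), rng.integers(0, 3, 300))  # (features, entity class)
    for _ in range(4)                                     # 4 temporal batches
]

for i, (X_train, y_train) in enumerate(batches):
    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    for j, (X_test, y_test) in enumerate(batches):
        if i == j:
            continue
        acc = accuracy_score(y_test, clf.predict(X_test))
        print(f"train batch {i} -> test batch {j}: accuracy {acc:.2f}")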


2021
Author(s):
Carolina H Chung
Sriram Chandrasekaran

Drug combinations are a promising strategy to counter antibiotic resistance. However, current experimental and computational approaches do not account for the entire complexity involved in combination therapy design, such as the effect of the growth environment, drug order, and time interval. To address these limitations, we present an approach that uses genome-scale metabolic modeling and machine learning to explain and guide combination therapy design. Our approach (a) accommodates diverse data types, (b) accurately predicts drug interactions in various growth conditions, (c) accounts for time- and order-specific interactions, and (d) identifies mechanistic factors driving drug interactions. The entropy in bacterial stress response, time between treatments, and gluconeogenesis activation were the most predictive features of combination therapy outcomes across time scales and growth conditions. Analysis of the vast landscape of condition-specific drug interactions revealed promising new drug combinations and a tradeoff in the efficacy between simultaneous and sequential combination therapies.
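
For the machine learning half of such a pipeline, a hedged sketch of regressing an interaction score on mechanistic features like those the study highlights (stress-response entropy, time between treatments, gluconeogenesis activity); the data, the score convention, and the choice of model are placeholders, not the authors' implementation.

```python
# Toy regression of a drug-interaction score on three mechanistic features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 150
X = np.column_stack([
    rng.uniform(0, 1, n),    # stress-response entropy (invented scale)
    rng.uniform(0, 12, n),   # time between treatments, hours (invented)
    rng.uniform(0, 5, n),    # gluconeogenesis flux (invented)
])
y = rng.normal(size=n)       # stand-in interaction score (e.g., negative = synergy)

model = RandomForestRegressor(random_state=0).fit(X, y)
for name, imp in zip(["entropy", "interval_h", "gluconeogenesis"],
                     model.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
```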


Sensors
2021
Vol 21 (17)
pp. 5929
Author(s):
Sikandar Zulqarnain Khan
Yannick Le Moullec
Muhammad Mahtab Alam

Machine Learning (ML) techniques can play a pivotal role in energy-efficient IoT networks by excluding unnecessary data from transmission. With this aim, this work combines a low-power yet computationally capable processing unit with an NB-IoT radio into a smart gateway that can run ML algorithms to intelligently transmit visual data over the NB-IoT network. The proposed smart gateway uses supervised and unsupervised ML algorithms to optimize the size and quality of visual data before they are transmitted over the air. This reduces the channel occupancy of an individual NB-IoT radio, reduces its energy consumption, and minimizes the data transmission time. Our on-field results indicate up to 93% fewer NB-IoT radio transmissions, up to 90.5% lower NB-IoT radio energy consumption, and up to 90% shorter data transmission time.
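
Purely as an illustration of the kind of decision logic such a gateway could run (the paper's actual algorithms are not reproduced here), the sketch below suppresses near-duplicate frames so that only informative ones reach the NB-IoT radio; the feature representation, threshold, and scene-change model are all invented.

```python
# Skip transmission of frames that barely differ from the last one sent.
import numpy as np

THRESHOLD = 0.15   # hypothetical per-element dissimilarity cutoff

def should_transmit(frame, last_sent, threshold=THRESHOLD):
    """Transmit only if the frame differs enough from the last one sent."""
    if last_sent is None:
        return True
    return np.linalg.norm(frame - last_sent) / frame.size > threshold

rng = np.random.default_rng(7)
last_sent, sent = None, 0
base = rng.normal(size=64)
for t in range(100):                       # simulated stream of frame features
    if t % 25 == 0:                        # occasional "scene change"
        base = rng.normal(size=64)
    frame = base + rng.normal(scale=0.1, size=64)
    if should_transmit(frame, last_sent):
        last_sent, sent = frame, sent + 1
print(f"transmitted {sent} of 100 frames")
```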


In the modeling of complex systems, the manual creation and maintenance of appropriate behavior models is a key problem. Behavior modeling using machine learning has proven successful in modeling and simulation. This paper presents artificial neural network (ANN) modeling of a transmission line carrying a frequency-varying signal using machine learning. The work uses proper orthogonal decomposition (POD)-based reduced-order modeling. In the proposed approach, snapshot sets of a complex mathematical model of a nonlinear transmission line, and also of a linear model, are obtained at different time intervals. These snapshot sets are arranged in matrix form separately for the nonlinear and linear models, and the POD method is applied to each matrix separately. This reduces the order of the matrix, which is then used as the input and output data set for neural network training. The trained neural network model has been verified using different untrained data sets. The proposed algorithm determines the dimension of the interpolation space, leading to a considerable decrease in computational expense. The algorithm does not impose any constraints on the topology of the circuit or the kind of nonlinear components, and it is hence applicable to general nonlinear systems.
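
A sketch of the POD step as it is commonly implemented: stack state snapshots into a matrix, take a truncated SVD, and keep the leading modes as the reduced coordinates used to train the network. The snapshot data below are random placeholders (real transmission-line snapshots are strongly correlated, so far fewer modes would be retained).

```python
# POD via truncated SVD of a snapshot matrix.
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_snapshots = 200, 50
snapshots = rng.normal(size=(n_nodes, n_snapshots))   # stand-in state snapshots

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1            # modes capturing 99% energy

basis = U[:, :r]                  # POD basis (n_nodes x r)
reduced = basis.T @ snapshots     # r x n_snapshots reduced training coordinates
print(f"reduced order: {r} of {n_nodes} states")
```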


Sensors
2021
Vol 22 (1)
pp. 203
Author(s):
Maha M. Alshammari
Afnan Almuhanna
Jamal Alhiyafi

A tumor is an abnormal tissue mass classified as either benign or malignant. A breast tumor is one of the most common tumors in women. Radiologists use mammograms to identify and classify breast tumors, a time-consuming process prone to error due to the complexity of the tumor. In this study, we applied machine learning-based techniques to assist the radiologist in reading mammogram images and classifying the tumor within a reasonable time interval. We extracted several features from the region of interest in the mammogram, which the radiologist annotated manually. These features are fed into a classification engine to train and build the proposed classification models. We used a dataset not previously seen by the model to evaluate the accuracy of the proposed system, following standard model-evaluation schemes. This study found that various factors could affect performance, which we mitigated through systematic experimentation. The study ultimately recommends using the optimized Support Vector Machine or Naïve Bayes classifier, which produced 100% accuracy after integrating the feature selection and hyper-parameter optimization schemes.
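
The reported pipeline shape (feature selection plus hyper-parameter search over an SVM) can be sketched as follows; since the annotated mammogram features are not available here, a bundled stand-in dataset is used and the parameter grid is illustrative, not the authors' settings.

```python
# Feature selection + hyper-parameter search over an SVM, on stand-in data.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif)),
    ("svm", SVC()),
])
grid = GridSearchCV(pipe, {
    "select__k": [10, 20, 30],
    "svm__C": [0.1, 1, 10],
    "svm__kernel": ["linear", "rbf"],
}, cv=5)
grid.fit(X_tr, y_tr)
print(f"held-out accuracy: {grid.score(X_te, y_te):.3f}")
```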


Author(s):  
Yu.E. Kuvayskova

To ensure the reliable functioning of a technical object, it is necessary to predict its state over the upcoming time interval. Let the technical state of the object at a certain point in time be characterized by a set of parameters established by the technical documentation for the object. It is assumed that, for certain values of these parameters, the object may be in a good or a faulty state. The task is to estimate, from the values of these parameters, the state of the object in the upcoming time interval. Supervised machine learning methods can be applied to solve this problem. However, to obtain good results in predicting the state of an object, it is necessary to choose the training model correctly. One of the disadvantages of individual machine learning models is high bias and high variance. In this paper, to reduce the variance of the model, it is proposed to use ensemble machine learning methods, namely the bagging procedure. The main idea of ensemble methods is that, with the right combination of weak models, more accurate and robust models can be obtained. The purpose of bagging is to create an ensemble model that is more reliable than the individual models that compose it. One big advantage of bagging is its parallelism, since the models in the ensemble are trained independently of each other. The effectiveness of the proposed approach is shown by the example of predicting the technical state of an object from eight parameters of its functioning. To assess the effectiveness of ensemble machine learning methods for predicting the technical state of an object, binary classification quality criteria are used: precision, recall, and F-measure. It is shown that the use of ensemble machine learning methods can improve the accuracy of predicting the state of a technical object by 4%-9% in comparison with basic machine learning methods. This approach can be used by specialists to predict the technical condition of objects in many technical applications, in particular in aviation.
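
A hedged sketch of the bagging setup described, assuming eight operating parameters and a binary good/faulty label; the data are synthetic and the base learner is an assumed choice. Note that n_jobs=-1 exercises the parallelism the text mentions, since the ensemble members train independently of each other.

```python
# Bagging an ensemble of decision trees on eight functioning parameters.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 8))                  # eight functioning parameters
y = (X[:, :3].sum(axis=1) > 0).astype(int)     # stand-in good/faulty label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                        n_jobs=-1, random_state=0).fit(X_tr, y_tr)
pred = bag.predict(X_te)
print(f"precision {precision_score(y_te, pred):.2f}  "
      f"recall {recall_score(y_te, pred):.2f}  F1 {f1_score(y_te, pred):.2f}")
```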


Author(s):  
Sathish Babu B.
K. Bhargavi
K. N. Subramanya

The advent of quantum computing threatens the successful operation of classical cryptographic techniques. To conduct quantum key distribution (QKD) in a finite time interval, there is a need to estimate photon states and analyze the fluctuations statistically. The use of brute-force and local search methods for parameter optimization is computationally intensive and becomes infeasible even for smaller connections. Therefore, quantum machine learning models with self-learning ability are useful in predicting the optimal parameters for quantum key distribution. This chapter discusses some of the quantum machine learning models, with their architectures, advantages, and disadvantages. The performance of the Quantum Convolutional Neural Network (QCNN) and Quantum Particle Swarm Optimization (QPSO) towards QKD is found to be good compared to all the other quantum machine learning models discussed.
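
As a classical stand-in for the QPSO idea (ordinary particle swarm optimization, not the quantum algorithm itself), the sketch below searches a toy two-parameter space whose objective plays the role of a QKD key-rate function; the objective, swarm size, and every constant are invented for illustration.

```python
# Classical particle swarm optimization over a toy 2-D parameter space.
import numpy as np

def key_rate(p):                 # hypothetical objective to maximize
    return -np.sum((p - np.array([0.3, 0.7]))**2, axis=-1)

rng = np.random.default_rng(9)
n, dim = 30, 2
pos = rng.uniform(0, 1, (n, dim))
vel = np.zeros((n, dim))
best_p, best_v = pos.copy(), key_rate(pos)      # per-particle bests
g = best_p[best_v.argmax()].copy()              # global best

for _ in range(100):
    r1, r2 = rng.uniform(size=(2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0, 1)
    v = key_rate(pos)
    improved = v > best_v
    best_p[improved], best_v[improved] = pos[improved], v[improved]
    g = best_p[best_v.argmax()].copy()

print(f"optimal parameters ~ {g.round(3)}")
```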

