A Methodology for Network Analysis to Improve the Cyber-Physical Communications in Next-Generation Networks

Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2247
Author(s):  
David Cortés-Polo ◽  
Luis Ignacio Jimenez Gil ◽  
José-Luis González-Sánchez ◽  
Jesús Calle-Cancho

Cyber-physical systems enable new applications and services that bring people, data, processes, and things together. The network is the backbone that interconnects this new paradigm, especially 5G networks, which will expand coverage, reduce latency, and enhance data rates. In this sense, network analytics will increase knowledge about the network and its interconnected devices, a key feature given the growing number of physical things (sensors, actuators, smartphones, tablets, and so on). With this growth, the usage of online networking services and applications will increase, and network operators need to detect and analyze all issues related to the network. In this article, a methodology to analyze real network information provided by a network operator and acquire knowledge of the communications is presented. Several real data sets, provided by Telecom Italia, are analyzed to compare two different zones: one located in the urban area of Milan, Italy, and its surroundings, and the second in the province of Trento, Italy. These data sets describe different areas and shapes, covering a metropolitan area in the first case and a mainly rural area in the second, which implies that the areas will exhibit different behaviors. To compare these behaviors and group them into a single cluster set, a new technique is presented that establishes a relationship between clusters and merges those that are similar.
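The cluster-merging step described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes each zone's clusters are summarized by fixed-length activity profiles and that similarity is measured by Pearson correlation; the profile names and the threshold are invented.

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation between two equal-length activity profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    if va == 0 or vb == 0:
        return 0.0  # constant profile: correlation undefined, treat as dissimilar
    return cov / (va * vb)

def merge_similar(clusters_a, clusters_b, threshold=0.9):
    """Pair clusters from two zones whose activity profiles correlate strongly."""
    return [
        (name_a, name_b)
        for name_a, prof_a in clusters_a.items()
        for name_b, prof_b in clusters_b.items()
        if pearson(prof_a, prof_b) >= threshold
    ]
```

A cluster with a morning-peaked profile in one zone would thus be paired with a similarly shaped cluster in the other zone, reducing the combined cluster set.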

2012 ◽  
pp. 163-186
Author(s):  
Jirí Krupka ◽  
Miloslava Kašparová ◽  
Pavel Jirava ◽  
Jan Mandys

The chapter presents the problem of quality-of-life modeling in the Czech Republic based on classification methods. It compares two methodological approaches: the first uses the approach of the Institute of Sociology of the Academy of Sciences of the Czech Republic, and the second concerns a project of the civic association Team Initiative for Local Sustainable Development. On the basis of real data sets from the institute and the team initiative, the authors synthesized and analyzed quality-of-life classification models. They used decision tree algorithms to generate transparent decision rules and compared the classification results of the resulting trees. Classifier models based on the C5.0, CHAID, C&RT, and C5.0 boosting algorithms were proposed and analyzed. The designed classification models were created in Clementine.
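The "transparent decision rules" such tree classifiers produce have roughly the following form. This is an illustrative toy only; the indicator names, thresholds, and class labels are invented and are not taken from the chapter's models.

```python
# Toy decision rules in the spirit of a C5.0/CHAID tree output.
# All attribute names and cut points below are hypothetical.
def classify_quality_of_life(record):
    """Map indicator values (fractions in [0, 1]) to a class label."""
    if record["unemployment"] > 0.10:
        return "low"
    if record["green_space"] >= 0.3 and record["services_access"] >= 0.5:
        return "high"
    return "medium"
```

The appeal of such rules is that each classification can be read off and audited directly, unlike an opaque model.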


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 412 ◽  
Author(s):  
Hadeel S. Klakattawi ◽  
Wedad H. Aljuhani

In this article, a new five-parameter distribution, the alpha power exponentiated Weibull-exponential distribution, is proposed, based on a newly developed technique. It is of particular interest because its density can take various symmetric and asymmetric shapes. Moreover, its hazard function is tractable and shows a great diversity of asymmetrical shapes, including increasing, decreasing, near-symmetrical, increasing-decreasing-increasing, increasing-constant-increasing, J-shaped, and reversed J-shaped. Some properties of the proposed distribution are provided. Maximum likelihood is employed to estimate the model's unknown parameters, and these estimates are evaluated in various simulation studies. Moreover, the usefulness of the model is investigated through its application to three real data sets. The results show that the proposed distribution can, in fact, fit the data better than other competing distributions.
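The alpha power transform underlying families of this kind maps any base CDF F(x) to G(x) = (α^F(x) − 1)/(α − 1). A minimal sketch is shown below with an exponential base; the paper's full five-parameter construction composes this with an exponentiated Weibull-exponential base and is not reproduced here.

```python
import math

def alpha_power_cdf(F_x, alpha):
    """Alpha power transform: G(x) = (alpha**F(x) - 1) / (alpha - 1), alpha > 0, alpha != 1."""
    if alpha == 1.0:
        return F_x  # the transform reduces to the base CDF as alpha -> 1
    return (alpha ** F_x - 1.0) / (alpha - 1.0)

def exponential_cdf(x, lam):
    """Base CDF used for illustration: F(x) = 1 - exp(-lam * x)."""
    return 1.0 - math.exp(-lam * x)
```

Note that G(x) inherits the endpoints of a valid CDF: G = 0 when F = 0 and G = 1 when F = 1, for any admissible α.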


Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6807
Author(s):  
Yong Xie ◽  
Yili Guo ◽  
Sheng Yang ◽  
Jian Zhou ◽  
Xiaobai Chen

The introduction of various networks into automotive cyber-physical systems (ACPS) poses great challenges for the security protection of ACPS functions. The auto industry recommends adopting the hardware security module (HSM)-based multicore ECU to secure in-vehicle networks while meeting delay constraints. However, this approach incurs significant hardware cost. Consequently, this paper aims to reduce the hardware cost of security enhancement by proposing two efficient design space exploration (DSE) algorithms, namely the stepwise decreasing-based heuristic algorithm (SDH) and the interference balancing-based heuristic algorithm (IBH), which explore task assignment, task scheduling, and message scheduling to minimize the number of required HSMs. Experiments on both synthetic and real data sets show that the proposed SDH and IBH are superior to the state-of-the-art algorithm, and their advantage becomes more pronounced as the percentage of security-critical tasks increases. For synthetic data sets, the hardware cost is reduced by 61.4% and 45.6% on average for IBH and SDH, respectively; for real data sets, by 64.3% and 54.4% on average. Furthermore, IBH is better than SDH in most cases, and the runtime of IBH is two to three orders of magnitude smaller than that of SDH and the state-of-the-art algorithm.
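The core resource-minimization problem can be caricatured as bin packing: security-critical task utilizations must be packed onto as few HSM-equipped cores as possible. The greedy first-fit-decreasing sketch below is a drastically simplified stand-in for SDH/IBH, which additionally handle task and message scheduling under delay constraints; it is not the paper's algorithm.

```python
def min_hsm_cores(task_utils, capacity=1.0):
    """Greedy first-fit-decreasing packing of task utilizations onto HSM cores.

    Returns the number of HSM-equipped cores needed so that the total
    utilization on each core does not exceed `capacity`.
    """
    cores = []  # remaining capacity on each opened core
    for u in sorted(task_utils, reverse=True):  # largest tasks first
        for i, free in enumerate(cores):
            if u <= free:
                cores[i] = free - u  # fits on an existing core
                break
        else:
            cores.append(capacity - u)  # open a new HSM core
    return len(cores)
```

A real DSE would also check that every security task still meets its end-to-end deadline after assignment, which is what makes the paper's problem hard.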


Polar Record ◽  
2007 ◽  
Vol 43 (4) ◽  
pp. 331-343 ◽  
Author(s):  
Franz J. Meyer

This paper describes a new technique to simultaneously estimate the topography and motion of polar glaciers from multi-temporal SAR interferograms. The approach combines several SAR interferograms in a least-squares adjustment using the Gauss-Markov model. For connecting the multi-temporal data sets, a spatio-temporal model is proposed that describes the properties of the surface and its temporal evolution. Rigorous mathematical modelling of functional and stochastic relations allows for a systematic description of the processing chain. It is also an optimal tool for setting parameters based on the statistics of every individual processing step, and for propagating errors into the results. Within the paper, theoretical standard deviations of the unknowns are calculated depending on the configuration of the data sets. The influence of gross errors in the observations and the effect of non-modelled error sources on the unknowns are estimated. A validation of the approach based on real data concludes the paper.
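A joint least-squares adjustment of this kind can be sketched for a single pixel: each interferogram contributes one phase observation modeled as phi_i = k_i·h + t_i·v, where h is height, v is velocity, k_i a height-to-phase factor, and t_i the temporal baseline. The coefficients below are invented for illustration; the paper's full Gauss-Markov model also carries a stochastic model (observation weights), omitted here.

```python
def solve_2x2_least_squares(rows, obs):
    """Solve the normal equations (A^T A) x = A^T y for x = (h, v).

    `rows` holds the design-matrix rows (k_i, t_i); `obs` the phases phi_i.
    """
    s11 = sum(a * a for a, b in rows)
    s12 = sum(a * b for a, b in rows)
    s22 = sum(b * b for a, b in rows)
    r1 = sum(a * y for (a, b), y in zip(rows, obs))
    r2 = sum(b * y for (a, b), y in zip(rows, obs))
    det = s11 * s22 - s12 * s12  # nonzero when the interferogram geometry is well-posed
    h = (s22 * r1 - s12 * r2) / det
    v = (s11 * r2 - s12 * r1) / det
    return h, v
```

With more interferograms than unknowns, the redundancy is what enables the paper's error propagation and gross-error analysis.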



2021 ◽  
Author(s):  
Jakob Raymaekers ◽  
Peter J. Rousseeuw

Many real data sets contain numerical features (variables) whose distribution is far from normal (Gaussian). Instead, their distribution is often skewed. In order to handle such data it is customary to preprocess the variables to make them more normal. The Box–Cox and Yeo–Johnson transformations are well-known tools for this. However, the standard maximum likelihood estimator of their transformation parameter is highly sensitive to outliers, and will often try to move outliers inward at the expense of the normality of the central part of the data. We propose a modification of these transformations as well as an estimator of the transformation parameter that is robust to outliers, so the transformed data can be approximately normal in the center and a few outliers may deviate from it. It compares favorably to existing techniques in an extensive simulation study and on real data.
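The standard Yeo–Johnson transformation the paper builds on is defined piecewise in the sign of x and the parameter λ. The sketch below implements that classical transform only; the authors' robust modification and robust estimator of λ are not reproduced here.

```python
import math

def yeo_johnson(x, lam):
    """Classical Yeo-Johnson transform of a single value x with parameter lam."""
    if x >= 0:
        if lam != 0:
            return ((x + 1) ** lam - 1) / lam
        return math.log(x + 1)           # limit case lam = 0
    if lam != 2:
        return -(((-x + 1) ** (2 - lam) - 1) / (2 - lam))
    return -math.log(-x + 1)             # limit case lam = 2
```

Unlike Box–Cox, the transform is defined for negative x as well, which is why it is the usual choice for features that are not strictly positive.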


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 62
Author(s):  
Zhengwei Liu ◽  
Fukang Zhu

Thinning operators play an important role in the analysis of integer-valued autoregressive models, and the most widely used is binomial thinning. Inspired by the theory of extended Pascal triangles, a new thinning operator named extended binomial is introduced, of which binomial thinning is a special case. Compared to the binomial thinning operator, the extended binomial thinning operator has two parameters and is more flexible in modeling. Based on the proposed operator, a new integer-valued autoregressive model is introduced, which can accurately and flexibly capture the dispersion features of count time series. Two-step conditional least squares (CLS) estimation is investigated for the innovation-free case, and conditional maximum likelihood estimation is also discussed. The asymptotic property of the two-step CLS estimator is also obtained. Finally, three overdispersed or underdispersed real data sets are considered to illustrate the superior performance of the proposed model.
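For readers unfamiliar with thinning: binomial thinning α∘X replaces each of the X counted units with an independent Bernoulli(α) survival, and an INAR(1) step is X_t = α∘X_{t−1} + ε_t. The sketch below shows this classical special case; the paper's two-parameter extended binomial operator is not reproduced here.

```python
import random

def binomial_thinning(alpha, x, rng=None):
    """Binomial thinning alpha∘X: each of x units survives with probability alpha."""
    rng = rng or random.Random()
    return sum(1 for _ in range(x) if rng.random() < alpha)

def inar1_step(alpha, x_prev, innovation, rng=None):
    """One INAR(1) transition: X_t = alpha ∘ X_{t-1} + eps_t."""
    return binomial_thinning(alpha, x_prev, rng) + innovation
```

Because the thinned value is a sum of Bernoulli trials, it stays integer-valued, which is the whole point of replacing multiplication in the Gaussian AR(1) recursion.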


Econometrics ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 10
Author(s):  
Šárka Hudecová ◽  
Marie Hušková ◽  
Simos G. Meintanis

This article considers goodness-of-fit tests for bivariate INAR and bivariate Poisson autoregression models. The test statistics are based on an L2-type distance between two estimators of the probability generating function of the observations: one entirely nonparametric and the other semiparametric, computed under the corresponding null hypothesis. The asymptotic distribution of the proposed test statistics is derived both under the null hypotheses and under alternatives, and consistency is proved. The case of testing bivariate generalized Poisson autoregression and the extension of the methods to dimensions higher than two are also discussed. The finite-sample performance of a parametric bootstrap version of the tests is illustrated via a series of Monte Carlo experiments. The article concludes with applications to real data sets and discussion.
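The nonparametric side of such a statistic is the empirical probability generating function. The sketch below shows a univariate empirical PGF and a discretized L2 distance on a grid over [0, 1]; the paper's statistic is bivariate, integrates against a weight, and compares against a semiparametric null-model estimator, none of which is reproduced here.

```python
def empirical_pgf(sample, u):
    """Empirical PGF: g_hat(u) = (1/n) * sum_i u**X_i for a count sample."""
    return sum(u ** x for x in sample) / len(sample)

def l2_distance(pgf_a, pgf_b, grid_points=101):
    """Discretized L2-type distance between two PGF estimates on [0, 1]."""
    total = 0.0
    for k in range(grid_points):
        u = k / (grid_points - 1)
        total += (pgf_a(u) - pgf_b(u)) ** 2
    return total / grid_points
```

Under the null, the two PGF estimates converge to the same function, so the distance tends to zero; the bootstrap calibrates how large it may be by chance.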


Information ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 202
Author(s):  
Louai Alarabi ◽  
Saleh Basalamah ◽  
Abdeltawab Hendawi ◽  
Mohammed Abdalla

The rapid spread of infectious diseases is a major public health problem. Recent developments in fighting these diseases have heightened the need for a contact tracing process. Contact tracing can be considered an ideal method for controlling the transmission of infectious diseases. The outcome of contact tracing is the performance of diagnostic tests, the treatment or self-isolation of suspected cases, and the treatment of infected persons; this eventually limits the spread of disease. This paper proposes a technique named TraceAll that traces all contacts exposed to an infected patient and produces a list of these contacts to be considered potentially infected. Initially, it treats the infected patient as the querying user and fetches the contacts exposed to them. Secondly, it obtains all the trajectories belonging to objects that moved near the querying user. Next, it investigates these trajectories, considering social distance and exposure period, to identify whether these objects have become infected. Experimental evaluation of the proposed technique on real data sets illustrates the effectiveness of this solution. Comparative analysis experiments confirm that TraceAll outperforms baseline methods by 40% in the efficiency of answering contact tracing queries.
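The per-trajectory contact test (social distance plus exposure period) can be sketched as below. This is a simplified stand-in, not TraceAll itself: it assumes trajectories with time-aligned samples and counts an object as a contact if it stays within `max_dist` for at least `min_overlap` consecutive samples; both parameters are invented.

```python
def is_contact(patient_traj, other_traj, max_dist=2.0, min_overlap=3):
    """Return True if the other object stayed close to the patient long enough.

    Both trajectories are lists of (t, x, y) samples with aligned timestamps.
    """
    run = 0  # length of the current streak of close samples
    for (t1, x1, y1), (t2, x2, y2) in zip(patient_traj, other_traj):
        close = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= max_dist
        run = run + 1 if close else 0
        if run >= min_overlap:
            return True
    return False
```

A full system would first prune candidate trajectories with a spatial index before running this check, which is where the query-efficiency gains come from.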


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 474
Author(s):  
Abdulhakim A. Al-Babtain ◽  
Ibrahim Elbatal ◽  
Hazem Al-Mofleh ◽  
Ahmed M. Gemeay ◽  
Ahmed Z. Afify ◽  
...  

In this paper, we introduce a new flexible generator of continuous distributions called the transmuted Burr X-G (TBX-G) family to extend and increase the flexibility of the Burr X generator. The general statistical properties of the TBX-G family are calculated. One special sub-model, TBX-exponential distribution, is studied in detail. We discuss eight estimation approaches to estimating the TBX-exponential parameters, and numerical simulations are conducted to compare the suggested approaches based on partial and overall ranks. Based on our study, the Anderson–Darling estimators are recommended to estimate the TBX-exponential parameters. Using two skewed real data sets from the engineering sciences, we illustrate the importance and flexibility of the TBX-exponential model compared with other existing competing distributions.
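The "transmuted" part of the TBX-G construction is the standard quadratic rank transmutation map, which turns any baseline CDF G(x) into F(x) = (1 + λ)G(x) − λG(x)², with |λ| ≤ 1. The sketch below shows only this map; the Burr X generator that supplies G in the paper is not reproduced here.

```python
def transmuted_cdf(G_x, lam):
    """Quadratic rank transmutation map: F = (1 + lam) * G - lam * G**2, |lam| <= 1."""
    return (1 + lam) * G_x - lam * G_x ** 2
```

Setting λ = 0 recovers the baseline distribution, so the extra parameter adds skewness flexibility around the baseline family.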

