VQ-based model for binary error process

2017 ◽  
Vol 68 (3) ◽  
pp. 167-179
Author(s):  
Tibor Csóka ◽  
Jaroslav Polec ◽  
Filip Csóka ◽  
Kvetoslava Kotuliaková

Abstract: A variety of complex techniques, such as forward error correction (FEC), automatic repeat request (ARQ), hybrid ARQ or cross-layer optimization, require in their design and optimization phase a realistic model of the binary error process present in a specific digital channel. Past and more recent modeling approaches focus on capturing one or more stochastic characteristics with precision sufficient for the desired model application, thereby applying concepts and methods that severely limit the model's applicability (e.g. in the form of prerequisite expectations on the modeled process). The proposed novel concept, utilizing a Vector Quantization (VQ)-based approach to binary process modeling, offers a viable alternative capable of superior modeling of the most commonly observed small- and large-scale stochastic characteristics of a binary error process on a digital channel. The precision of the proposed model was verified using multiple statistical distances against data captured in a wireless sensor network logical channel trace. Furthermore, Pearson's goodness-of-fit test was performed on the output of all model variants to conclusively demonstrate the usability of the model for a realistic captured binary error process. Finally, the presented results prove the proposed model's applicability and its ability to far surpass the capabilities of the reference Elliott model.
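The reference Elliott model mentioned above is the classic two-state Markov ("good"/"bad") burst-error model. As a point of comparison, here is a minimal sketch of such a simulator; all transition and error probabilities are made-up illustrative values, not taken from the paper:

```python
import random

def gilbert_elliott(n, p_gb=0.05, p_bg=0.3, e_good=0.001, e_bad=0.3, seed=1):
    """Simulate a binary error process with a two-state Gilbert-Elliott
    model: a 'good' and a 'bad' state, each with its own bit-error rate."""
    rng = random.Random(seed)
    state_bad = False
    errors = []
    for _ in range(n):
        # Markov transition between the two channel states
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        else:
            if rng.random() < p_gb:
                state_bad = True
        # emit an error bit according to the current state's error rate
        err_rate = e_bad if state_bad else e_good
        errors.append(1 if rng.random() < err_rate else 0)
    return errors

trace = gilbert_elliott(100000)
print(sum(trace) / len(trace))  # overall bit-error rate of the trace
```

The VQ-based model in the paper is precisely an alternative to such state-machine generators, so a simulator like this is what its output traces would be benchmarked against.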

2017 ◽  
Vol 52 (3) ◽  
pp. 204-211 ◽  
Author(s):  
Manik Bansal ◽  
Indra Vir Singh ◽  
Bhanu K Mishra ◽  
Kamal Sharma ◽  
IA Khan

In this work, a strength-pair model has been proposed for the numerical prediction of the flexural strength probability of NBG-18 nuclear-grade graphite. The input to the proposed model is a random pair of tensile and compressive strengths whose value is based on its probability of occurrence in the experimental data. A finite element-based deterministic numerical approach has been implemented. To account for the large difference between tensile and compressive strengths, the Drucker–Prager failure criterion has been implemented. The failure envelope of the Drucker–Prager criterion is assumed to fit the Mohr–Coulomb model at the uniaxial stress states in the principal stress space. A total of 292 simulations with random pairs of tensile and compressive strength are performed on a three-point bend specimen to obtain a set of flexural strength data. The flexural strength data obtained through numerical simulations are fitted using normal and Weibull distributions. The flexural strength probability obtained from the proposed model is found to be on the conservative side. A goodness-of-fit test concludes that the Weibull distribution fits the numerical data better than the normal distribution.
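Fitting a Weibull distribution to a set of simulated flexural strengths can be sketched with median-rank regression, a common graphical-fit technique for brittle-strength data; this is an illustrative stand-in, not necessarily the fitting procedure used in the paper:

```python
import math

def weibull_fit(strengths):
    """Fit a two-parameter Weibull distribution to strength data by
    median-rank regression: linearize F(x) = 1 - exp(-(x/eta)^beta)
    as ln(-ln(1 - F)) = beta*ln(x) - beta*ln(eta), then least-squares fit."""
    xs = sorted(strengths)
    n = len(xs)
    pts = []
    for i, x in enumerate(xs, start=1):
        f = (i - 0.3) / (n + 0.4)  # Bernard's median-rank estimate
        pts.append((math.log(x), math.log(-math.log(1.0 - f))))
    mx = sum(px for px, _ in pts) / n
    my = sum(py for _, py in pts) / n
    sxy = sum((px - mx) * (py - my) for px, py in pts)
    sxx = sum((px - mx) ** 2 for px, _ in pts)
    beta = sxy / sxx                 # shape (Weibull modulus)
    eta = math.exp(mx - my / beta)   # scale (characteristic strength)
    return beta, eta
```

A high fitted modulus `beta` indicates low scatter in strength, which is the usual reason Weibull outperforms the normal distribution for graphite flexural data.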


2019 ◽  
Vol 29 (7) ◽  
pp. 1787-1798
Author(s):  
Hyunkeun Ryan Cho ◽  
Seonjin Kim ◽  
Myung Hee Lee

Biomedical studies often involve an event that occurs to individuals at different times and has a significant influence on individual trajectories of response variables over time. We propose a statistical model to capture the mean trajectory alteration caused not only by the occurrence of the event but also by the subject-specific time of the event. The proposed model provides a post-event mean trajectory smoothly connected with the pre-event mean trajectory by allowing the model parameters associated with the post-event mean trajectory to vary over the time of the event. A goodness-of-fit test is considered to investigate how well the proposed model fits the data. Hypothesis tests are also developed to assess the influence of the subject-specific time of event on the mean trajectory. Theoretical and simulation studies confirm that the proposed tests choose the correctly specified model consistently and examine the effect of the subject-specific time of event successfully. The proposed model and tests are also illustrated by the analysis of two real-life datasets: a biomarker study of HIV patients with their own times of treatment initiation, and a body fatness study in girls with different ages of menarche.
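The idea of a post-event mean trajectory smoothly connected to the pre-event one can be illustrated with a simple hinge (broken-stick) mean function; the coefficients here are made-up, and the sketch omits the event-time-varying parameters of the actual model:

```python
def mean_trajectory(t, t_event, b0=1.0, b1=0.5, d1=-0.8):
    """Illustrative mean response: a linear pre-event trend b0 + b1*t plus a
    post-event slope change d1 that switches on at the subject-specific event
    time t_event. The hinge term max(t - t_event, 0) is zero before the event,
    so the post-event trajectory connects continuously to the pre-event one."""
    return b0 + b1 * t + d1 * max(t - t_event, 0.0)
```

In the paper's model, coefficients such as `d1` would additionally be functions of `t_event`, which is what lets the trajectory alteration depend on when the event occurred.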


2018 ◽  
Vol 7 (3) ◽  
pp. 1558
Author(s):  
S Lakshmisridevi ◽  
R Devanathan

The application of Zipf's law is universal, not only in linguistics but also in various other areas. Mandelbrot modified Zipf's law into the Zipf–Mandelbrot (ZM) law; we propose a further modification of the ZM law for modeling rank–frequency data of linguistic text. Our model generalizes the ZM law into a linear regression model involving an arbitrary order of the Zipfian rank of words in a text. The performance of the proposed model was studied for an English text and shown to compare favorably with that of the ZM law using the Chi-square goodness-of-fit test. In this paper we apply the model to a Tamil text; its performance is likewise satisfactory, as confirmed by the Chi-square test. Since the model mainly addresses the lower ranks, we propose to extend the work to higher-order ranks using an LNRE model in the future.
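The Zipf–Mandelbrot law and the Pearson Chi-square statistic used for the goodness-of-fit comparison can be sketched as follows; the parameter values `s` and `b` are illustrative, not fitted to any corpus:

```python
def zipf_mandelbrot(n_ranks, s=1.0, b=2.7, total=10000.0):
    """Expected word frequencies under the Zipf-Mandelbrot law
    f(r) proportional to 1 / (r + b)^s for ranks r = 1..n_ranks,
    scaled so the frequencies sum to a given total count."""
    w = [1.0 / (r + b) ** s for r in range(1, n_ranks + 1)]
    z = sum(w)
    return [total * wi / z for wi in w]

def chi_square(observed, expected):
    """Pearson chi-square goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

Comparing `chi_square(observed, zipf_mandelbrot(...))` across candidate models is exactly the kind of test the abstract describes; the regression generalization in the paper replaces the single `(r + b)^s` term with higher-order rank terms.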


2020 ◽  
Vol 39 (3) ◽  
pp. 4041-4058
Author(s):  
Fang Liu ◽  
Xu Tan ◽  
Hui Yang ◽  
Hui Zhao

Intuitionistic fuzzy preference relations (IFPRs) have the natural ability to reflect the positive, the negative and the non-determinative judgements of decision makers. A decision making model is proposed by considering the inherent property of IFPRs in this study, where the main novelty comes with the introduction of the concept of additive approximate consistency. First, the consistency definitions of IFPRs are reviewed and the underlying ideas are analyzed. Second, by considering the allocation of the non-determinacy degree of decision makers’ opinions, the novel concept of approximate consistency for IFPRs is proposed. Then the additive approximate consistency of IFPRs is defined and the properties are studied. Third, the priorities of alternatives are derived from IFPRs with additive approximate consistency by considering the effects of the permutations of alternatives and the allocation of the non-determinacy degree. The rankings of alternatives based on real, interval and intuitionistic fuzzy weights are investigated, respectively. Finally, some comparisons are reported by carrying out numerical examples to show the novelty and advantage of the proposed model. It is found that the proposed model can offer various decision schemes due to the allocation of the non-determinacy degree of IFPRs.
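To make the notion of additive (approximate) consistency concrete, here is a sketch for an ordinary fuzzy preference relation (membership degrees only); the paper's definition for IFPRs additionally handles the non-membership and non-determinacy parts, which this toy check omits:

```python
def additive_consistency_error(R):
    """Maximum deviation from additive transitivity r_ik = r_ij + r_jk - 0.5
    over all index triples of a fuzzy preference relation R (entries in
    [0, 1], with r_ii = 0.5). Zero means the relation is additively
    consistent; testing against a small tolerance instead of zero gives an
    'approximate consistency' check."""
    n = len(R)
    return max(abs(R[i][k] - (R[i][j] + R[j][k] - 0.5))
               for i in range(n) for j in range(n) for k in range(n))

# A relation built from priority weights w via r_ij = 0.5 + (w_i - w_j)/2
# is additively consistent by construction:
w = [0.5, 0.3, 0.2]
R = [[0.5 + (wi - wj) / 2 for wj in w] for wi in w]
print(additive_consistency_error(R))
```

Deriving priorities then runs this construction in reverse: given an (approximately) consistent relation, recover weights `w` that reproduce it as closely as possible.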


Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems involving people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate ones and to promote diligent ones.

Purpose: To develop a method of assessing the expected contributor quality in community tagging systems. This method should use only the generally unreliable and incomplete information provided by contributors (with ground-truth tags unknown).

Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method of assessing the expected contributor quality. The method is based on comparing tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method with the preference relation replaced by a special domination characteristic. Expected contributor quality is evaluated as a positive eigenvector of a pairwise domination characteristic matrix. Community tagging simulation has confirmed that the proposed method adequately estimates the expected quality of community tagging system contributors (provided that the contributors' behavior fits the proposed model).

Practical relevance: The obtained results can be used in the development of systems based on coordinated community efforts (primarily, community tagging systems).
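The eigenvector step can be sketched with plain power iteration; the matrix entries below stand in for the pairwise domination characteristics and are illustrative values only:

```python
def principal_eigenvector(M, iters=200):
    """Power iteration: the normalized positive eigenvector of a square
    matrix with positive entries (it exists and is unique up to scale by the
    Perron-Frobenius theorem). Here the entries would be pairwise domination
    characteristics and the eigenvector the expected contributor qualities."""
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]   # renormalize so the scores sum to 1
    return v
```

This is the same computational pattern used in eigenvector-based ranking methods generally (e.g. AHP priority vectors); the paper's contribution is in how the domination matrix itself is built from overlapping tag sets.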


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are generally helpful for the healthcare and biomedical sectors in predicting disease. For trivial symptoms, it is difficult to see a doctor in the hospital at any time; thus, big data provides essential information about diseases on the basis of the patient's symptoms. For several medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model offers structured input that requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning concept. Here, different datasets pertaining to "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for conducting the experiment. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized in order to bring the attribute ranges to a common level. Further, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to create large-scale deviation. Here, the weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization algorithm (JA-MVO). The optimally extracted features are fed to hybrid deep learning algorithms combining a "Deep Belief Network (DBN) and Recurrent Neural Network (RNN)". As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Finally, a comparative evaluation of the proposed prediction against existing models certifies its effectiveness through various performance measures.
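The first two phases, normalization and weighted feature extraction, can be sketched as min-max scaling followed by per-attribute weighting; the weights here are fixed constants purely for illustration, whereas the paper optimizes them with JA-MVO:

```python
def normalize_and_weight(rows, weights):
    """Min-max normalize each attribute (column) to [0, 1], then multiply by
    a per-attribute weight to stretch or shrink its deviation. rows is a list
    of records, weights one constant per attribute."""
    cols = list(zip(*rows))
    scaled = []
    for col, w in zip(cols, weights):
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0      # guard against constant columns
        scaled.append([w * (x - lo) / span for x in col])
    return [list(row) for row in zip(*scaled)]
```

In the full pipeline, the weighted features produced this way would be the inputs to the DBN/RNN predictors.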


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1670
Author(s):  
Waheeb Abu-Ulbeh ◽  
Maryam Altalhi ◽  
Laith Abualigah ◽  
Abdulwahab Ali Almazroi ◽  
Putra Sumari ◽  
...  

Cyberstalking is a growing anti-social problem being transformed on a large scale and in various forms. Cyberstalking detection has become increasingly popular in recent years and has technically been investigated by many researchers. However, cyberstalking victimization, an essential part of cyberstalking, has empirically received less attention from the research community. This paper attempts to address this gap and develop a model to understand and estimate the prevalence of cyberstalking victimization. The model is built on routine activities and lifestyle exposure theories and includes eight hypotheses. The data were collected from 757 respondents at Jordanian universities. The paper follows a quantitative approach and uses structural equation modeling for data analysis. The results revealed a modest prevalence range that depends on the cyberstalking type. The results also indicated that proximity to motivated offenders, suitable targets, and digital guardians significantly influences cyberstalking victimization. The outcome of the moderation hypothesis testing demonstrated that age and residence have a significant effect on cyberstalking victimization. The proposed model is an essential element for assessing cyberstalking victimization among societies and provides a valuable understanding of its prevalence. This can assist researchers and practitioners in future research on cyberstalking victimization.


Author(s):  
Gábor Bergmann

Abstract: Studying large-scale collaborative systems engineering projects across teams with differing intellectual property clearances, or healthcare solutions where sensitive patient data needs to be partially shared, or similar multi-user information systems over databases, all boils down to a common mathematical framework. Updateable views (lenses) and more generally bidirectional transformations are abstractions to study the challenge of exchanging information between participants with different read access privileges. The view provided to each participant must be different due to access control or other limitations, yet also consistent in a certain sense, to enable collaboration towards common goals. A collaboration system must apply bidirectional synchronization to ensure that after a participant modifies their view, the views of other participants are updated so that they are consistent again. While bidirectional transformations (synchronizations) have been extensively studied, there are new challenges that are unique to the multidirectional case. If complex consistency constraints have to be maintained, synchronizations that work fine in isolation may not compose well. We demonstrate and characterize a failure mode of the emergent behaviour, where a consistency restoration mechanism undoes the work of other participants. On the other end of the spectrum, we study the case where synchronizations work especially well together: we characterize very well-behaved multidirectional transformations, a non-trivial generalization from the bidirectional case. For the former challenge, we introduce a novel concept of controllability, while for the latter one, we propose a novel formal notion of faithful decomposition. Additionally, the paper proposes several novel properties of multidirectional transformations.
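A minimal dictionary-based lens illustrates the updateable-view abstraction and the two classic round-trip laws (GetPut, PutGet); this is a toy sketch, far simpler than the multidirectional, constraint-maintaining setting the paper studies:

```python
class FieldLens:
    """A minimal updateable view (lens) onto a single key of a dict-based
    'database'. get extracts the participant's view; put writes an updated
    view back into the source without touching fields the participant
    cannot see."""
    def __init__(self, key):
        self.key = key

    def get(self, src):
        return src[self.key]

    def put(self, src, view):
        out = dict(src)       # leave the hidden fields untouched
        out[self.key] = view
        return out

lens = FieldLens("dose")
record = {"patient": "p1", "dose": 5}
# The two classic well-behavedness (round-trip) laws:
assert lens.get(lens.put(record, 7)) == 7                # PutGet
assert lens.put(record, lens.get(record)) == record      # GetPut
```

The multidirectional case replaces this single lens with one lens per participant over a shared source, and the paper's controllability and faithful-decomposition notions address what can go wrong (and right) when their `put` operations interleave.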

