A Secure and Stable Multicast Overlay Network with Load Balancing for Scalable IPTV Services

2012 ◽  
Vol 2012 ◽  
pp. 1-12 ◽  
Author(s):  
Tsao-Ta Wei ◽  
Chia-Hui Wang ◽  
Yu-Hsien Chu ◽  
Ray-I Chang

IPTV over P2P networks, an emerging multimedia Internet application, offers significant advantages in scalability. However, IPTV media content delivered over P2P networks on the public Internet still raises issues of privacy and intellectual property rights. In this paper, we use the SIP protocol to construct a secure application-layer multicast overlay network for IPTV, called SIPTVMON. SIPTVMON secures all IPTV media delivery paths against eavesdroppers via elliptic-curve Diffie-Hellman (ECDH) key exchange over SIP signaling and AES encryption. Its load-balancing overlay tree is also optimized against peer heterogeneity and the churn of peers joining and leaving, to minimize both service degradation and latency. Performance results from large-scale simulations and experiments on different optimization criteria demonstrate SIPTVMON's cost-effectiveness in privacy protection, stability under user churn, and good perceptual quality, in terms of objective PSNR values, for scalable IPTV services over the Internet.
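The key-agreement step described above can be illustrated with a minimal sketch. For brevity, a toy classic Diffie-Hellman exchange over a prime field stands in for the paper's elliptic-curve variant, using only the Python standard library; the group parameters are illustrative assumptions, and a real deployment would use a vetted ECDH implementation with AES for the media stream.

```python
import hashlib
import secrets

# Toy Diffie-Hellman standing in for SIPTVMON's ECDH step (illustration only;
# the prime p and generator g below are NOT production-grade parameters).
p, g = 2**127 - 1, 5

# Each peer picks a private exponent and sends g^x mod p over SIP signaling.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A, B = pow(g, a, p), pow(g, b, p)

# Both peers derive the same shared secret, hashed into a 256-bit media key
# (which would then drive AES encryption of the IPTV stream).
k_alice = hashlib.sha256(str(pow(B, a, p)).encode()).digest()
k_bob = hashlib.sha256(str(pow(A, b, p)).encode()).digest()
assert k_alice == k_bob  # eavesdroppers see only A and B, never the key
```

The point of the exchange is that only the public values A and B cross the network, so a passive eavesdropper on the signaling path cannot recover the AES key.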

Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7326
Author(s):  
Alper Kaan Sarica ◽  
Pelin Angin

The significant advances in wireless networks over the past decade have made a variety of Internet of Things (IoT) use cases possible, greatly facilitating many operations in our daily lives. IoT is only expected to grow with 5G and beyond networks, which will rely primarily on software-defined networking (SDN) and network functions virtualization for achieving the promised quality of service. The prevalence of IoT and the large attack surface it has created call for SDN-based intelligent security solutions that achieve real-time, automated intrusion detection and mitigation. In this paper, we propose a real-time intrusion detection and mitigation solution for SDN, which aims to provide autonomous security in the high-traffic IoT networks of the 5G and beyond era, while achieving a high degree of interpretability by human experts. The proposed approach is built upon automated flow feature extraction and classification of flows using random forest classifiers at the SDN application layer. We present an SDN-specific dataset that we generated for IoT and provide results on the accuracy of intrusion detection, in addition to performance results in the presence and absence of our proposed security mechanism. The experimental results demonstrate that the proposed security approach is promising for achieving real-time, highly accurate detection and mitigation of attacks in SDN-managed IoT networks.
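The flow-classification idea at the heart of this approach can be sketched in a few lines. The feature set and synthetic data below are assumptions for illustration only, not the authors' SDN dataset; the sketch shows only the general shape of training a random forest on per-flow features and scoring a new flow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical per-flow features: [packets/s, bytes/packet, duration s, SYN ratio]
benign = rng.normal([50, 500, 10, 0.1], [10, 50, 2, 0.05], size=(200, 4))
attack = rng.normal([5000, 60, 1, 0.9], [500, 10, 0.5, 0.05], size=(200, 4))
X = np.vstack([benign, attack])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = attack

# Train the forest; at the SDN application layer the prediction would
# trigger a mitigation action (e.g., installing a drop rule) for label 1.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[4800, 55, 0.8, 0.92]]))  # a flood-like flow, flagged as attack
```

Random forests also expose per-feature importances (`clf.feature_importances_`), which is one way the interpretability mentioned above can be obtained.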


Author(s):  
Yifan Yang ◽  
Yutaka Ohtake ◽  
Hiromasa Suzuki

Abstract Making arts and crafts is an essential application of 3D printing. However, typical 3D printers have limited resolution, so the perceptual quality of the result is often low, particularly when the input mesh is a relief. To address this problem with existing 3D printing technology, we operate only on the shape of the input triangle mesh. To improve the perceptual quality of a 3D-printed product, we propose an integrated mesh-processing pipeline that comprises feature extraction, 3D print preview, a feature preservation test, and shape enhancement. The proposed method can identify and enlarge features that need to be enhanced without large-scale deformation. In addition, to improve ease of use, intermediate processes are visualized via user interfaces. To evaluate the proposed method, the processed triangle meshes are 3D printed. The effectiveness of the proposed approach is confirmed by comparing photographs of the original and enhanced 3D prints.


2019 ◽  
Vol 3 (1) ◽  
pp. 63-86 ◽  
Author(s):  
Yanan Wang ◽  
Jianqiang Li ◽  
Sun Hongbo ◽  
Yuan Li ◽  
Faheem Akhtar ◽  
...  

Purpose Simulation is a well-known technique for using computers to imitate the operations of various kinds of real-world facilities or processes. The facility or process of interest is usually called a system, and to study it scientifically, we often have to make a set of assumptions about how it works. These assumptions, which usually take the form of mathematical or logical relationships, constitute a model that is used to gain some understanding of how the corresponding system behaves. The quality of this understanding essentially depends on the credibility of the given assumptions or models, which is assessed through VV&A (verification, validation and accreditation). The main purpose of this paper is to present an in-depth theoretical review and analysis of the application of VV&A in large-scale simulations. Design/methodology/approach After summarizing related VV&A research, the standards, frameworks, techniques, methods and tools are discussed according to the characteristics of large-scale simulations (such as crowd network simulations). Findings The contributions of this paper will be useful for both academics and practitioners formulating VV&A in large-scale simulations (such as crowd network simulations). Originality/value This paper helps researchers by providing recommendations for formulating VV&A in large-scale simulations (such as crowd network simulations).


Author(s):  
Dan Chen

The emergence of Grid technologies provides exciting new opportunities for large-scale simulation over the Internet, enabling collaboration and the use of distributed computing resources, while also facilitating access to geographically distributed data sets. This chapter presents HLA_Grid_RePast, a middleware platform for executing large-scale collaborating RePast agent-based models on the Grid. The chapter also provides performance results and an analysis of quality of service from a deployment of the system between the UK and Singapore.


Author(s):  
Diego Goldsztajn ◽  
Sem C. Borst ◽  
Johan S. H. van Leeuwaarden ◽  
Debankur Mukherjee ◽  
Philip A. Whiting

We consider a large-scale service system where incoming tasks have to be instantaneously dispatched to one out of many parallel server pools. The user-perceived performance degrades with the number of concurrent tasks and the dispatcher aims at maximizing the overall quality of service by balancing the load through a simple threshold policy. We demonstrate that such a policy is optimal on the fluid and diffusion scales, while only involving a small communication overhead, which is crucial for large-scale deployments. In order to set the threshold optimally, it is important, however, to learn the load of the system, which may be unknown. For that purpose, we design a control rule for tuning the threshold in an online manner. We derive conditions that guarantee that this adaptive threshold settles at the optimal value, along with estimates for the time until this happens. In addition, we provide numerical experiments that support the theoretical results and further indicate that our policy copes effectively with time-varying demand patterns. Summary of Contribution: Data centers and cloud computing platforms are the digital factories of the world, and managing resources and workloads in these systems involves operations research challenges of an unprecedented scale. Due to the massive size, complex dynamics, and wide range of time scales, the design and implementation of optimal resource-allocation strategies is prohibitively demanding from a computation and communication perspective. These resource-allocation strategies are essential for certain interactive applications, for which the available computing resources need to be distributed optimally among users in order to provide the best overall experienced performance. This is the subject of the present article, which considers the problem of distributing tasks among the various server pools of a large-scale service system, with the objective of optimizing the overall quality of service provided to users. 
A solution to this load-balancing problem cannot rely on maintaining complete state information at the gateway of the system, since this is computationally unfeasible, due to the magnitude and complexity of modern data centers and cloud computing platforms. Therefore, we examine a computationally light load-balancing algorithm that is yet asymptotically optimal in a regime where the size of the system approaches infinity. The analysis is based on a Markovian stochastic model, which is studied through fluid and diffusion limits in the aforementioned large-scale regime. The article analyzes the load-balancing algorithm theoretically and provides numerical experiments that support and extend the theoretical results.
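The threshold policy described above admits a very small sketch: a task goes to any pool with fewer than `theta` concurrent tasks, and only when every pool is at or above the threshold does the dispatcher fall back to the least-loaded pool. The threshold value and pool sizes below are illustrative assumptions, and the adaptive tuning of `theta` from the article is omitted.

```python
import random

def dispatch(loads, theta, rng=random):
    """Assign one task under a threshold policy.

    loads: mutable list of concurrent-task counts per server pool.
    theta: threshold; pools below it are all acceptable targets,
           so only one bit of state per pool needs to be tracked.
    """
    below = [i for i, x in enumerate(loads) if x < theta]
    if below:
        i = rng.choice(below)  # any pool under the threshold is fine
    else:
        i = min(range(len(loads)), key=loads.__getitem__)  # fallback
    loads[i] += 1
    return i

loads = [3, 5, 2, 5]
chosen = dispatch(loads, theta=4)  # lands on pool 0 or 2, the only ones below 4
```

The communication saving comes from the `below` test: pools only need to signal a one-bit "below/above threshold" state rather than exact queue lengths, except in the rare fallback case.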


Author(s):  
Jian Tao ◽  
Werner Benger ◽  
Kelin Hu ◽  
Edwin Mathews ◽  
Marcel Ritter ◽  
...  

Author(s):  
Hoang Nhu Dong ◽  
Hoang Nam Nguyen ◽  
Hoang Trong Minh ◽  
Takahiko Saba

Femtocell networks have been proposed for indoor communications as an extension of cellular networks for enhancing coverage. Because femtocells have a small coverage radius, typically from 15 to 30 meters, a femtocell user (FU) walking at low speed can still make several femtocell-to-femtocell handovers during a connection. When performing a femtocell-to-femtocell handover, the femtocell selection scheme used to choose the target handover femtocell has to be able not only to reduce unnecessary handovers but also to support the FU's quality of service (QoS). In this paper, we propose a femtocell selection scheme for femtocell-to-femtocell handover, named the Mobility Prediction and Capacity Estimation based scheme (MPCE-based scheme), which combines the advantages of mobility prediction and femtocell available-capacity estimation. Performance results obtained by computer simulation show that the proposed MPCE-based scheme can reduce unnecessary femtocell-to-femtocell handovers, maintain low data delay and improve the throughput of femtocell users. DOI: 10.32913/rd-ict.vol3.no14.536
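The selection idea combines the two signals named above. The scoring rule below is a hypothetical stand-in, not the authors' formula: it simply favors candidates where the predicted probability that the user stays in coverage and the estimated spare capacity are jointly high.

```python
def select_femtocell(candidates):
    """Pick a handover target from (cell_id, p_stay, spare_mbps) tuples.

    p_stay: mobility-prediction estimate that the FU remains in the cell
            (discourages handovers that would soon repeat).
    spare_mbps: estimated available capacity (supports the FU's QoS).
    The product score is an illustrative assumption only.
    """
    return max(candidates, key=lambda c: c[1] * c[2])[0]

# Hypothetical candidate list seen at handover time.
cells = [("F1", 0.9, 2.0), ("F2", 0.5, 8.0), ("F3", 0.8, 3.0)]
best = select_femtocell(cells)  # F2: highest combined score (0.5 * 8.0)
```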


Author(s):  
A. Babirad

Cerebrovascular diseases are a problem of the world today and, according to forecasts, will remain a problem in the near future. The main risk factors for the development of ischemic disorders of the cerebral circulation include obesity and aging, arterial hypertension, smoking, diabetes mellitus and heart disease. An effective strategy for the prevention of cerebrovascular events is based on the implementation of large-scale risk control measures, including the use of antiplatelet and anticoagulant therapy and invasive interventions such as atherectomy, angioplasty and stenting. In this regard, the combined efforts of neurologists, cardiologists, vascular surgeons, endocrinologists and other specialists are the basis for achieving an acceptable clinical outcome. A review of the SF-36 method for assessing the quality of life in patients after transient ischemic stroke is presented. Quality-of-life assessment is recognized in world medical practice and research as an indicator that is also used to assess the quality of the health care system and in general sociological research.


Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems that involve people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate contributors and to promote diligent ones. Purpose: To develop a method of assessing a contributor's expected quality in community tagging systems. This method should use only the generally unreliable and incomplete information provided by contributors (with ground-truth tags unknown). Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method of assessing a contributor's expected quality. The method is based on comparing tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method, with the preference relation replaced by a special domination characteristic. Expected contributor quality is evaluated as a positive eigenvector of a pairwise domination characteristic matrix. Community tagging simulation has confirmed that the proposed method adequately estimates the expected quality of community tagging system contributors (provided that contributor behavior fits the proposed model). Practical relevance: The obtained results can be used in the development of systems based on coordinated community efforts (primarily, community tagging systems).
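The eigenvector step can be sketched with a toy example. The matrix entries below are fabricated for illustration (they are not derived from the paper's domination characteristic); the sketch only shows how a positive quality vector falls out of power iteration on a nonnegative pairwise matrix.

```python
import numpy as np

# Hypothetical pairwise domination matrix: D[i, j] measures how strongly
# contributor i's tag sets dominate contributor j's on shared images.
D = np.array([[0.0, 0.7, 0.9],
              [0.3, 0.0, 0.6],
              [0.1, 0.4, 0.0]])

# Power iteration converges to the positive (Perron) eigenvector of D,
# which serves as the vector of expected contributor qualities.
q = np.ones(D.shape[0])
for _ in range(200):
    q = D @ q
    q /= q.sum()  # normalize so scores sum to 1

print(q)  # contributor 0 dominates the others most, so it scores highest
```

Because all off-diagonal entries are positive, the Perron-Frobenius theorem guarantees a unique positive eigenvector, so the ranking does not depend on the starting point.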


2020 ◽  
Vol 103 (11) ◽  
pp. 1194-1199

Objective: To develop and validate a Thai version of the Wisconsin Quality of Life (TH WISQoL) questionnaire. Materials and Methods: The authors developed the TH WISQoL questionnaire following a standard multi-step process. Subsequently, the authors recruited patients with kidney stones and asked them to complete the TH WISQoL and a validated Thai version of the 36-Item Short Form Survey (TH SF-36). The authors calculated the internal consistency and interdomain correlation of the TH WISQoL and assessed the convergent validity between the two instruments. Results: Thirty kidney stone patients completed the TH WISQoL and the TH SF-36. The TH WISQoL showed acceptable internal consistency for all domains (Cronbach's alpha 0.768 to 0.909). Interdomain correlation was high for most domains (r=0.698 to 0.779), except for the correlation between the Vitality and Disease domains, which was moderate (r=0.575). For convergent validity, the TH WISQoL demonstrated a good overall correlation with the TH SF-36 (r=0.796, p<0.05). Conclusion: The TH WISQoL is valid and reliable for evaluating the quality of life of Thai patients with kidney stones. A further large-scale multi-center study is warranted to confirm its applicability in Thailand. Keywords: Quality of life, Kidney stone, Validation, Outcome measurement
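The internal-consistency statistic reported above, Cronbach's alpha, has a short closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The item scores below are toy data invented for illustration, not TH WISQoL responses.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                              # number of items
    item_var = scores.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Toy 4-respondent, 3-item domain (e.g., Likert scores 1-5).
scores = np.array([[4, 5, 4],
                   [2, 2, 3],
                   [5, 4, 5],
                   [3, 3, 2]])
alpha = cronbach_alpha(scores)
print(round(alpha, 3))  # high alpha: respondents answer the items consistently
```

Values in the 0.768-0.909 range reported for the TH WISQoL domains indicate that items within each domain co-vary strongly, which is the usual acceptability criterion (alpha >= 0.7).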

