Adaptive Broadcasting Mechanism for Bandwidth Allocation in Mobile Services

2014 ◽  
Vol 2014 ◽  
pp. 1-14
Author(s):  
Gwo-Jiun Horng ◽  
Chi-Hsuan Wang ◽  
Chih-Lun Chou

This paper proposes a tree-based adaptive broadcasting (TAB) algorithm for data dissemination to improve data access efficiency. The TAB algorithm first constructs a broadcast tree to determine the broadcast frequency of each data item, then splits the broadcast tree into broadcast woods to generate the broadcast program. In addition, the paper develops an analytical model to derive the mean access latency of the generated broadcast program. In light of the derived results, the bandwidths of both the index channel and the data channel can be optimally allocated to maximize bandwidth utilization. Experiments are presented to evaluate the effectiveness of the proposed strategy; the results show that the proposed mechanism is feasible in practice.
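The abstract does not give the TAB construction itself, but the general idea of deriving per-item broadcast frequencies and laying out a broadcast program can be sketched. The sketch below uses the classic square-root rule for push-based broadcast (frequency proportional to the square root of access probability), which is an assumption, not the paper's tree-splitting method:

```python
import math

def broadcast_program(access_probs, cycle_len):
    """Assign each item a broadcast frequency proportional to the square
    root of its access probability (the classic square-root rule for
    push-based broadcast), then lay the copies out over one cycle."""
    weights = {k: math.sqrt(p) for k, p in access_probs.items()}
    total = sum(weights.values())
    # Number of slots per item in one broadcast cycle (at least 1 each).
    slots = {k: max(1, round(cycle_len * w / total)) for k, w in weights.items()}
    # Spread each item's copies as evenly as possible across the cycle.
    program = [None] * sum(slots.values())
    for k, n in sorted(slots.items(), key=lambda kv: -kv[1]):
        step = len(program) / n
        for i in range(n):
            j = int(i * step)
            while program[j] is not None:      # next free slot on collision
                j = (j + 1) % len(program)
            program[j] = k
    return program

prog = broadcast_program({"A": 0.6, "B": 0.3, "C": 0.1}, cycle_len=8)
```

Hot items recur more often in the cycle, which is what lowers their expected access latency; the paper's analytical model derives that latency exactly for its own program layout.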

2015 ◽  
Vol 23 (21) ◽  
pp. 27376 ◽  
Author(s):  
Mitradeep Sarkar ◽  
Jean-François Bryche ◽  
Julien Moreau ◽  
Mondher Besbes ◽  
Grégory Barbillon ◽  
...  

2015 ◽  
Vol 50 (5) ◽  
pp. 1-10
Author(s):  
Alen Bardizbanyan ◽  
Magnus Själander ◽  
David Whalley ◽  
Per Larsson-Edefors

1983 ◽  
Vol 105 (1) ◽  
pp. 29-33 ◽  
Author(s):  
A. M. Clausing

Cavity solar receivers are generally believed to have higher thermal efficiencies than external receivers due to reduced losses. A simple analytical model was presented by the author which indicated that the ability to heat the air inside the cavity often controls the convective loss from cavity receivers. Thus, if the receiver contains a large amount of inactive hot wall area, it can experience a large convective loss. Excellent experimental data from a variety of cavity configurations and orientations have recently become available. These data provided a means of testing and refining the analytical model. In this manuscript, a brief description of the refined model is presented. Emphasis is placed on using available experimental evidence to substantiate the hypothesized mechanisms and assumptions. Detailed comparisons are given between analytical predictions and experimental results. Excellent agreement is obtained, and the important mechanisms are more clearly delineated.
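The central mechanism the abstract describes — that the ability to heat the air inside the cavity often limits the convective loss — can be illustrated with a toy energy balance. This is not Clausing's actual correlations; `h`, `area_active` and `m_dot` are hypothetical inputs:

```python
def convective_loss(h, area_active, m_dot, cp, t_wall, t_ambient):
    """Illustrative only: the convective loss cannot exceed either the
    heat the active hot walls give up to the air, or the heat the air
    stream through the cavity can absorb. All inputs are hypothetical,
    not Clausing's refined model."""
    dT = t_wall - t_ambient
    wall_driven = h * area_active * dT   # W, wall-to-air convection
    air_limited = m_dot * cp * dT        # W, heat the air flow can carry off
    return min(wall_driven, air_limited)

q = convective_loss(h=10.0, area_active=5.0, m_dot=0.02,
                    cp=1005.0, t_wall=700.0, t_ambient=300.0)
```

With these numbers the air-heating term is the smaller one, so it controls the loss — the regime in which a receiver with much inactive hot wall area suffers a large convective penalty.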


2019 ◽  
Vol 9 (13) ◽  
pp. 2684 ◽  
Author(s):  
Hongyang Li ◽  
Lizhuang Liu ◽  
Zhenqi Han ◽  
Dan Zhao

Peeling fibre is an indispensable process in the production of preserved Szechuan pickle, and its accuracy significantly influences product quality; the contour method of fibre detection is therefore studied as the core algorithm of an automatic peeling device. The fibre contour is a non-salient contour, characterized by big intra-class differences and small inter-class differences, meaning that the contour feature is not discriminative. A method called dilated holistically-nested edge detection (Dilated-HED), built on the HED network and dilated convolution, is proposed to detect the fibre contour. The experimental results on our dataset show a Pixel Accuracy (PA) of 99.52% and a Mean Intersection over Union (MIoU) of 49.99%, achieving state-of-the-art performance.
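PA and MIoU are standard segmentation metrics computed from a class confusion matrix; the small example below (hypothetical pixel counts, not the paper's data) shows why a heavily imbalanced task like thin-contour detection can score a very high PA alongside a MIoU near 50%:

```python
def pa_and_miou(conf):
    """conf[i][j] = number of pixels of true class i predicted as class j.
    Returns (pixel accuracy, mean intersection-over-union)."""
    n = len(conf)
    total = sum(sum(row) for row in conf)
    correct = sum(conf[i][i] for i in range(n))
    pa = correct / total
    ious = []
    for i in range(n):
        tp = conf[i][i]
        fp = sum(conf[j][i] for j in range(n)) - tp   # predicted i, true j != i
        fn = sum(conf[i]) - tp                        # true i, predicted j != i
        if tp + fp + fn:
            ious.append(tp / (tp + fp + fn))
    return pa, sum(ious) / len(ious)

# Two classes: background vs. fibre contour (hypothetical counts).
pa, miou = pa_and_miou([[9900, 40],
                        [60,    0]])
```

Here the dominant background class keeps PA at 99% even though the rare contour class has IoU 0, dragging MIoU down to about 0.5 — the same pattern as the 99.52% PA / 49.99% MIoU reported above.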


1985 ◽  
Vol 107 (2) ◽  
pp. 188-195 ◽  
Author(s):  
S. Okabe ◽  
Y. Kamiya ◽  
K. Tsujikado ◽  
Y. Yokoyama

This paper presents the conveying velocity on a vibratory conveyor whose track is vibrated by nonsinusoidal vibration. The velocity wave form of the vibrating track is approximated by six straight lines, and five distortion factors of the wave form are defined. Considering the modes of motion of the particle, the mean conveying velocity is calculated for various conditions. Referring to these results, the optimum wave form is clarified analytically. The theoretical results show that the mean conveying velocity is considerably larger than that of ordinary feeders if the proper conveying conditions are chosen. The theoretical results are confirmed by experimental results.
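A waveform "approximated by six straight lines" is a piecewise-linear function of time over one period. The sketch below only computes the time average of such a track-velocity waveform with the trapezoidal rule — the particle dynamics and distortion factors analysed in the paper are not reproduced, and the breakpoint values are hypothetical:

```python
def mean_velocity(breakpoints):
    """breakpoints: list of (t, v) pairs defining a piecewise-linear
    velocity waveform over one period; returns its time average
    (trapezoidal rule, exact for straight-line segments)."""
    area = 0.0
    for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
        area += 0.5 * (v0 + v1) * (t1 - t0)
    return area / (breakpoints[-1][0] - breakpoints[0][0])

# Six straight segments (seven breakpoints), hypothetical values.
wave = [(0.0, 0.0), (0.1, 0.3), (0.3, 0.3), (0.5, -0.1),
        (0.7, -0.1), (0.9, 0.1), (1.0, 0.0)]
v_mean = mean_velocity(wave)
```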


2020 ◽  
Author(s):  
Ali Amir Khairbek

Standard enthalpies of hydrogenation of 29 unsaturated hydrocarbon compounds were calculated in the gas phase by CCSD(T) theory with the complete basis sets cc-pVXZ (X = DZ, TZ), as well as by complete basis set (CBS) limit extrapolation. Geometries of reactants and products were optimized at the M06-2X/6-31g(d) level. These M06-2X geometries were used in the CCSD(T)/cc-pVXZ//M06-2X/6-31g(d) and cc-pV(DT)Z extrapolation calculations. Comparison between the CCSD(T) calculations and experiment gives mean absolute deviations (MADs) of the hydrogenation enthalpies ranging from 8.8 to 3.4 kJ mol−1. The MAD improves to 1.5 kJ mol−1 after complete basis set limit extrapolation. For several compounds, the deviations from experiment lie within "chemical accuracy" (±1 kcal mol−1 ≈ ±4.2 kJ mol−1). Very good linear correlations between experimental and calculated enthalpies of hydrogenation were obtained at the CCSD(T)/cc-pVTZ//M06-2X/6-31g(d) and CCSD(T)/cc-pV(DT)Z extrapolation levels (SD = 2.11 and 2.12 kJ mol−1, respectively).
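The cc-pV(DT)Z two-point scheme commonly follows the Helgaker-style form E(X) = E_CBS + A·X⁻³ (strictly derived for the correlation energy); whether the paper uses exactly this variant is an assumption. The MAD calculation and the DZ/TZ (X = 2, 3) extrapolation look like this, with hypothetical numbers rather than the paper's data:

```python
def cbs_two_point(e_dz, e_tz, x=2, y=3):
    """Two-point extrapolation E(X) = E_CBS + A*X**-3 solved for E_CBS
    from DZ (X=2) and TZ (X=3) values."""
    return (y**3 * e_tz - x**3 * e_dz) / (y**3 - x**3)

def mad(calc, exp):
    """Mean absolute deviation between calculated and experimental values."""
    return sum(abs(c - e) for c, e in zip(calc, exp)) / len(calc)

# Hypothetical hydrogenation enthalpies in kJ/mol, not the paper's data.
e_cbs = cbs_two_point(-120.0, -126.0)
deviation = mad([-125.0, -98.0], [-126.5, -97.0])
```

Note that the extrapolated value lies beyond the TZ result, which is the usual behaviour: the basis-set error shrinks monotonically with X.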


2019 ◽  
pp. 254-277 ◽  
Author(s):  
Ying Zhang ◽  
Chaopeng Li ◽  
Na Chen ◽  
Shaowen Liu ◽  
Liming Du ◽  
...  

Since large amounts of geospatial data are produced by various sources, geospatial data integration is difficult because of a shortage of semantics. Although standardised data formats and data access protocols, such as Web Feature Service (WFS), give end-users access to heterogeneous data stored in different formats from various sources, integration remains time-consuming and ineffective due to the lack of semantics. To solve this problem, a prototype for geospatial data integration is proposed that addresses four problems: geospatial data retrieving, modeling, linking and integrating. Four kinds of geospatial data sources are adopted to evaluate the performance of the proposed approach. The experimental results illustrate that the proposed linking method achieves high performance in generating matched candidate record pairs in terms of Reduction Ratio (RR), Pairs Completeness (PC), Pairs Quality (PQ) and F-score. The integration results show that each data source gains substantial Complementary Completeness (CC) and Increased Completeness (IC).
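RR, PC and PQ are standard record-linkage quality measures; a minimal sketch of their usual definitions (the paper's exact formulations are assumed to match), with F-score as the harmonic mean of PC and PQ:

```python
def linkage_metrics(candidates, true_matches, total_pairs):
    """Standard blocking/linking quality measures:
    RR = 1 - |candidates| / total_pairs      (how much comparison work is saved)
    PC = |candidates & true_matches| / |true_matches|  (recall of true matches)
    PQ = |candidates & true_matches| / |candidates|    (precision of candidates)
    F  = harmonic mean of PC and PQ."""
    hits = len(candidates & true_matches)
    rr = 1 - len(candidates) / total_pairs
    pc = hits / len(true_matches)
    pq = hits / len(candidates)
    f = 2 * pc * pq / (pc + pq) if pc + pq else 0.0
    return rr, pc, pq, f

# Hypothetical record-pair IDs, not the paper's data sources.
rr, pc, pq, f = linkage_metrics({(1, 1), (2, 2), (3, 4)},
                                {(1, 1), (2, 2), (3, 3)},
                                total_pairs=100)
```

A good linking method keeps RR high (few candidate pairs out of all possible pairs) without sacrificing PC, which is exactly the trade-off these four numbers summarise.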




Author(s):  
Ghalem Belalem

Data grids have become an interesting and popular domain in the grid community (Foster and Kesselmann, 2004). Grids are generally proposed as solutions for large-scale systems, where data replication is a well-known technique used to reduce access latency and bandwidth consumption, and to increase availability. Alongside the advantages of replication, several problems must be solved:
• replica placement, which determines the optimal locations of replicated data in order to reduce storage cost and data access cost (Xu et al., 2002);
• determining which replica will be accessed, in terms of consistency, when a read or write operation must be executed (Ranganathan and Foster, 2001);
• the degree of replication, which consists in finding a minimal number of replicas without reducing the performance of user applications;
• replica consistency, which concerns the consistency of a set of replicated data and provides the user a completely coherent view of all replicas (Gray et al., 1996).
Our principal aim in this article is to integrate into the consistency management service an approach based on an economic model for resolving conflicts detected in the data grid.
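The replication-degree problem above has a simple cost-based flavour that an economic model exploits: each extra replica adds storage cost but lowers expected access cost. The toy sketch below (not the article's actual economic model; all costs are hypothetical) picks the degree minimising a total cost of that shape:

```python
def best_replica_degree(n_sites, storage_cost, access_cost, read_rate):
    """Pick the replication degree k minimising a toy total cost:
    k replicas cost k*storage_cost, while the expected per-read access
    cost is modelled as falling like access_cost/k as reads are served
    from closer replicas. Illustrative only."""
    def cost(k):
        return k * storage_cost + read_rate * access_cost / k
    return min(range(1, n_sites + 1), key=cost)

k = best_replica_degree(n_sites=10, storage_cost=4.0,
                        access_cost=1.0, read_rate=100.0)
```

With these numbers the optimum balances 4k of storage against 100/k of access cost, landing at k = 5; in a real grid the "prices" would come from the economic model's negotiation between sites.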

