Novel One Item Network Coding Vectors

2021 ◽  
Author(s):  
Anas AbuDaqa ◽  
Ashraf Mahmoud

This paper presents a novel approach that greatly reduces the network coding coefficient overhead to a very small value that does not exceed 8 bytes. Consequently, other performance metrics, e.g., download time and throughput, are improved.
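A common way to shrink coding-vector overhead to a few bytes is to transmit only a PRNG seed from which the receiver regenerates the random coefficients. The sketch below illustrates that general idea over GF(2) with an 8-byte integer seed; the generation size, packet length, and field are illustrative assumptions, not the paper's exact scheme.

```python
import random

GENERATION = 8   # packets combined per coded packet (illustrative)
PKT_LEN = 16     # payload bytes per packet (illustrative)

def coefficients(seed, n):
    """Re-derive the GF(2) coefficient vector from an 8-byte integer seed."""
    rng = random.Random(seed)
    return [rng.getrandbits(1) for _ in range(n)]

def encode(packets, seed):
    """XOR-combine the packets selected by the seeded coefficient vector.

    Only (seed, coded payload) is transmitted: the 8-byte seed stands in
    for the full coefficient vector, which the receiver regenerates.
    """
    coded = bytes(PKT_LEN)
    for c, pkt in zip(coefficients(seed, len(packets)), packets):
        if c:
            coded = bytes(a ^ b for a, b in zip(coded, pkt))
    return seed, coded
```

A receiver calls `coefficients(seed, GENERATION)` to rebuild the same vector, then runs ordinary Gaussian elimination once it holds enough independent combinations.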


2016 ◽  
Vol 145 (5) ◽  
pp. 925-941 ◽  
Author(s):  
G. MURPHY ◽  
C. D. PILCHER ◽  
S. M. KEATING ◽  
R. KASSANJEE ◽  
S. N. FACENTE ◽  
...  

SUMMARY: In 2011 the Incidence Assay Critical Path Working Group reviewed the current state of HIV incidence assays and helped to determine a critical path to the introduction of an HIV incidence assay. At that time the Consortium for Evaluation and Performance of HIV Incidence Assays (CEPHIA) was formed to spur progress and raise standards among the assay developers, scientists and laboratories involved in HIV incidence measurement, and to structure and conduct a direct, independent comparative evaluation of the performance of 10 existing HIV incidence assays, considered singly and in combinations as recent infection test algorithms. In this paper we report on a new framework for HIV incidence assay evaluation that has emerged from this effort over the past 5 years, which includes a preliminary target product profile for an incidence assay, a consensus around key performance metrics, analytical tools, and deployment of a standardized approach for incidence assay evaluation. The specimen panels for this evaluation have been collected in large volumes, characterized using a novel approach for infection-dating rules, and assembled into panels designed to assess the impact of important sources of measurement error with incidence assays, such as viral subtype, elite host control of viraemia and antiretroviral treatment. We present the specific rationale for several of these innovations, and discuss important resources for assay developers and researchers that have recently become available. Finally, we summarize the key remaining steps on the path to development and implementation of reliable assays for monitoring HIV incidence at a population level.


2020 ◽  
Vol 10 (7) ◽  
pp. 2206
Author(s):  
Anas A. AbuDaqa ◽  
Ashraf Mahmoud ◽  
Marwan Abu-Amara ◽  
Tarek Sheltami

Peer-to-peer (P2P) content distribution and file sharing systems aim to facilitate the dissemination of large files over unreliable networks. Network coding is a transmission technique that has captured the interest of researchers because of its ability to increase the throughput and robustness of the network and to decrease the download time. In this survey paper, we extensively summarize, assess, compare, and classify the most recently used techniques for improving the performance of P2P content distribution systems using network coding. To the best of our knowledge, this is the first comprehensive survey that specifically focuses on the performance of network-coding-based P2P file-sharing systems.
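The throughput benefit that such surveys describe is often introduced with the classic butterfly example, in which a relay forwards the XOR of two packets so that a single bottleneck transmission serves both sinks. A minimal sketch of that textbook example (illustrative, not taken from the survey):

```python
def relay(a: bytes, b: bytes) -> bytes:
    """The bottleneck relay forwards the XOR combination of both packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def recover(direct: bytes, coded: bytes) -> bytes:
    """A sink XORs the packet it received directly with the coded packet
    to recover the packet it never saw."""
    return bytes(x ^ y for x, y in zip(direct, coded))
```

With packets `a` and `b`, the sink that saw `a` computes `recover(a, relay(a, b))` to obtain `b`, and vice versa: both packets are delivered using one bottleneck transmission instead of two.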


2010 ◽  
Vol 12 (1) ◽  
pp. 86-94 ◽  
Author(s):  
Amir Hesam Salavati ◽  
Babak Hossein Khalaj ◽  
Pedro M. Crespo ◽  
Mohammad Reza Aref

2015 ◽  
Vol 19 (11) ◽  
pp. 4689-4705 ◽  
Author(s):  
D. Lee ◽  
P. Ward ◽  
P. Block

Abstract. Globally, flood catastrophes lead all natural hazards in terms of impacts on society, causing billions of dollars of damages annually. Here, a novel approach to defining (3-month) high-flow seasons globally is presented by identifying temporal patterns of streamflow. The main high-flow season is identified using a volume-based threshold technique and the PCR-GLOBWB model. In comparison with observations, 40 % (50 %) of locations at a station (sub-basin) scale have identical peak months and 81 % (89 %) are within 1 month, indicating fair agreement between modeled and observed high-flow seasons. Minor high-flow seasons are also defined for bi-modal flow regimes. The identified major and minor high-flow seasons together are found to represent actual flood records from the Dartmouth Flood Observatory well, further substantiating the model's ability to reproduce the appropriate high-flow season. These high-spatial-resolution high-flow seasons and associated performance metrics allow for an improved understanding of the temporal characterization of streamflow and of flood potential, causation, and management. This is especially attractive for regions with limited observations and/or little capacity to develop flood early-warning systems.
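The volume-based window selection can be sketched compactly: slide a 3-month window over the (circular) monthly hydrograph and keep the start month with the largest total volume. This is a simplified illustration of the idea, not the paper's full PCR-GLOBWB-based procedure:

```python
def high_flow_season(monthly_flow):
    """Start month (0 = January) of the 3-month window with the largest
    total flow volume, treating the year as circular."""
    q = list(monthly_flow)
    # total volume of each of the 12 possible 3-month windows
    volumes = [q[m] + q[(m + 1) % 12] + q[(m + 2) % 12] for m in range(12)]
    return max(range(12), key=volumes.__getitem__)
```

For a monsoon-like hydrograph peaking in July, the function returns the June-to-August window start; a minor season for bi-modal regimes could be found the same way after masking the months of the major window.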


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
M. Anandaraj ◽  
P. Ganeshkumar ◽  
K. P. Vijayakumar ◽  
K. Selvaraj

Network coding (NC) makes content distribution more effective and easier in P2P content distribution networks and reduces the burden on the original seeder. It generalizes traditional network routing by allowing intermediate nodes to generate new coded packets by combining received packets. The randomization introduced by network coding makes all packets equally important and resolves the problem of locating the rarest block. Further, it reduces traffic in the network. In this paper, we analyze the performance of traditional network coding in a P2P content distribution network using a mathematical model, and it is proved that traffic reduction is not fully achieved in P2P networks using traditional network coding. This is due to the redundant transmission of non-innovative information blocks among the peers in the network. Hence, we propose a new framework, called I2NC (intelligent-peer selection and incremental-network coding), to eliminate the unnecessary flooding of non-innovative coded packets and thereby further improve the performance of network coding in P2P content distribution. A comparative study and analysis of the proposed system is made through various related implementations, and the results show that traffic is reduced by 10–15% and that the average and maximum download times are improved by reducing the original seeder's workload.
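A coded packet is innovative when its coding vector is linearly independent of the vectors a peer already holds. One standard way to test this is to compare the matrix rank over GF(2) before and after appending the new vector; the sketch below illustrates that test, not the I2NC algorithm itself.

```python
def gf2_rank(rows):
    """Rank of a 0/1 matrix (list of equal-length bit lists) over GF(2)."""
    rows = [r.copy() for r in rows]
    rank = 0
    n = len(rows[0]) if rows else 0
    for col in range(n):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                # eliminate this column from every other row (XOR = GF(2) add)
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def is_innovative(held_vectors, new_vector):
    """True if the new coding vector increases the rank of the held set."""
    return gf2_rank(held_vectors + [new_vector]) > gf2_rank(held_vectors)
```

A peer would discard (or never forward) a packet for which `is_innovative` returns False, which is exactly the redundant traffic the abstract describes.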


Author(s):  
Nataliia Melnykova ◽  
Nataliya Shakhovska ◽  
Michal Gregus ◽  
Volodymyr Melnykov ◽  
Mariana Zakharchuk ◽  
...  

The study applies machine learning and data mining methods to treatment personalization, which makes it possible to investigate individual patient characteristics. Personalization is built on clustering and association rules. It is suggested to determine the average distance between instances in order to find optimal performance metrics. A formalization of the medical data pre-processing stage for finding personalized solutions based on current standards and pharmaceutical protocols is proposed, and a model of patient data is built. The paper presents a novel clustering approach built on an ensemble of clustering algorithms that achieves better Hopkins statistic values than the k-means algorithm. Personalized treatment is usually based on decision trees; such an approach requires a lot of computation time and cannot be parallelized. Therefore, it is proposed to classify patients by condition and to determine the deviations of their parameters from the normative and average parameters of the group. This made it possible to create a personalized approach to treatment for each patient based on long-term monitoring. According to the results of the analysis, it becomes possible to predict the optimal conditions for a particular patient and to select medication according to personal characteristics.
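The Hopkins statistic mentioned above measures how strongly a dataset tends to cluster: values near 0.5 indicate random data, values near 1 indicate clear cluster structure. A minimal sketch of the standard definition (not the paper's exact evaluation pipeline; sample size `m` and seeding are illustrative):

```python
import math
import random

def hopkins(X, m=10, seed=0):
    """Hopkins statistic for a list of d-dimensional points.

    Compares nearest-neighbour distances of uniform random probes (u)
    with those of sampled real points (w): u / (u + w).
    """
    rng = random.Random(seed)
    d = len(X[0])
    lo = [min(p[j] for p in X) for j in range(d)]
    hi = [max(p[j] for p in X) for j in range(d)]

    def nearest(q, data, exclude=None):
        # distance from q to its nearest point in data (optionally not itself)
        return min(math.dist(q, p) for p in data if p is not exclude)

    u = sum(nearest([rng.uniform(lo[j], hi[j]) for j in range(d)], X)
            for _ in range(m))
    w = sum(nearest(p, X, exclude=p) for p in rng.sample(X, m))
    return u / (u + w)
```

Tightly clustered data yields values close to 1 because random probes land far from any real point while real points sit near their neighbours.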


Abstract: A novel approach for estimating precipitation patterns is developed here and applied to generate a new hydrologically corrected daily precipitation dataset, called RAIN4PE (for 'Rain for Peru and Ecuador'), at 0.1° spatial resolution for the period 1981-2015 covering Peru and Ecuador. It is based on the application of (a) the random forest method to merge multi-source precipitation estimates (gauge, satellite, and reanalysis) with terrain elevation, and (b) observed and modeled streamflow data to firstly detect biases and secondly further adjust gridded precipitation by inversely applying the simulated results of the eco-hydrological model SWAT (Soil and Water Assessment Tool). Hydrological results using RAIN4PE as input for the Peruvian and Ecuadorian catchments were compared against the ones when feeding other uncorrected (CHIRP and ERA5) and gauge-corrected (CHIRPS, MSWEP, and PISCO) precipitation datasets into the model. For that, SWAT was calibrated and validated at 72 river sections for each dataset using a range of performance metrics, including hydrograph goodness of fit and flow duration curve signatures. Results showed that gauge-corrected precipitation datasets outperformed uncorrected ones for streamflow simulation. However, CHIRPS, MSWEP, and PISCO showed limitations for streamflow simulation in several catchments draining into the Pacific Ocean and the Amazon River. RAIN4PE provided the best overall performance for streamflow simulation, including flow variability (low-, high- and peak-flows) and water budget closure. The overall good performance of RAIN4PE as input for hydrological modeling provides a valuable criterion of its applicability for robust countrywide hydrometeorological applications, including hydroclimatic extremes such as droughts and floods.


2021 ◽  
Author(s):  
Neha Periwal ◽  
Priya Sharma ◽  
Pooja Arora ◽  
Saurabh Pandey ◽  
Baljeet Kaur ◽  
...  

Classification between coding (CDS) and non-coding RNA (ncRNA) sequences is a challenge, and several machine learning models have been developed for this task. Since the frequency of curated coding sequences is many-fold that of the ncRNAs, we devised a novel approach to work with the complete datasets from fifteen diverse species. In our proposed binary approach, we replaced all A,T with 0 and G,C with 1 to obtain a binary form of the coding and ncRNA sequences. The k-mer analysis of these binary sequences revealed that the frequencies of binary patterns among the coding and ncRNA sequences can be used as features to distinguish between them. Using insights from these distinguishing frequencies, we used a k-nearest neighbour classifier to classify the sequences. Our strategy is not only time-efficient but also leads to significantly improved performance metrics, including the Matthews correlation coefficient (MCC), for some species such as P. paniscus, M. mulatta, M. lucifugus, G. gallus, C. japonica, C. abingdonii, A. carolinensis, D. melanogaster and C. elegans when compared with the conventional ATGC approach. Additionally, we show that the MCC values obtained for diverse species tested on the model based on H. sapiens correlate with the geological evolutionary timeline, further strengthening our approach. Therefore, we propose that CDS and ncRNA sequences can be efficiently classified using 2-character frequencies rather than the 4-character frequencies of the ATGC approach. Thus, our highly efficient binary approach can successfully replace the more complex ATGC approach.
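The described feature extraction (A/T to 0, G/C to 1, then binary k-mer frequencies) can be sketched directly; the resulting 2^k-dimensional vectors, rather than the 4^k-dimensional ATGC ones, would feed the k-nearest neighbour classifier. The value of k here is illustrative:

```python
from collections import Counter
from itertools import product

# Binary encoding described in the abstract: A/T -> '0', G/C -> '1'.
BIN = str.maketrans("ATGC", "0011")

def binary_kmer_features(seq, k=3):
    """Frequencies of the 2**k binary k-mers in an A/T/G/C sequence,
    in lexicographic k-mer order ('000', '001', ...)."""
    b = seq.upper().translate(BIN)
    counts = Counter(b[i:i + k] for i in range(len(b) - k + 1))
    total = max(1, sum(counts.values()))
    return [counts["".join(p)] / total for p in product("01", repeat=k)]
```

Note the dimensionality gain: for k = 6 the binary alphabet yields 64 features instead of the 4096 of the ATGC alphabet, which is the efficiency the abstract highlights.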


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Aminah Robinson Fayek ◽  
Alireza Golabchi

Purpose: The purpose of this study is to provide a framework to identify performance metrics for evaluating research and development collaborations.
Design/methodology/approach: The framework is developed through a review of similar centres and academic studies, followed by surveys and interviews of researchers and industry practitioners for the case of the Construction Innovation Centre (CIC). The proposed framework consists of identification of existing industry research and development needs, development of a research roadmap representing top research priorities, and identification of the most important services to provide to industry partners, which form the context for defining performance evaluation metrics.
Findings: A research roadmap is presented, outlining top research areas and methods and a list of the most in-demand services, including research, practical, and training and outreach services. Metrics for evaluating the performance of proposed projects, completed projects and a collaborative research centre are also identified.
Originality/value: This study presents a novel approach to defining performance metrics for the evaluation of research and development collaborations. The approach and findings of this study can be adopted by other collaborative research centres and initiatives around the world to develop effective metrics for performance measurement. The proposed framework provides a platform for defining performance metrics in the context of the research roadmap and top-priority services applicable to the research and development collaboration.

