An Efficient Network Utilization Scheme for Optical Burst Switched Networks

2010
Vol 3 (2)
pp. 34-49
Author(s):  
Rajneesh Randhawa ◽  
J.S. Sohal ◽  
Amit Kumar Garg ◽  
R. S. Kaler

Optical Burst Switching (OBS) is one of the most important switching technologies for future IP over wavelength division multiplexing (WDM) networks. In an OBS network, burst assembly is a challenging issue in the implementation of the system: the assembly technique shapes burst characteristics, which in turn affect network performance. In this paper, the authors propose an efficient hybrid burst assembly approach based on an approximate queuing network model, chosen to reduce time complexity. Throughput performance has been investigated, taking into account both burst loss probability and time complexity. Simulation results have shown that the proposed hybrid approach, based on a variable burst length threshold and a fixed maximum time limitation, provides a good trade-off between burst blocking performance and scheduling time.
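The hybrid assembly rule described above (release a burst when either a variable length threshold or a fixed time limit is reached, whichever comes first) can be sketched as follows. The function name, parameters, and the simple list-based packet representation are illustrative assumptions, not the authors' implementation:

```python
def assemble_bursts(packets, length_threshold, max_time):
    """Hybrid burst assembly sketch: a burst is released when its
    accumulated length reaches length_threshold, OR when max_time
    has elapsed since its first packet arrived, whichever is first.
    packets: list of (arrival_time, length) tuples, time-ordered."""
    bursts = []
    current, start_time, size = [], None, 0
    for arrival_time, length in packets:
        if start_time is None:
            start_time = arrival_time
        # timer expiry: release whatever has accumulated so far
        if arrival_time - start_time >= max_time and current:
            bursts.append(current)
            current, start_time, size = [], arrival_time, 0
        current.append((arrival_time, length))
        size += length
        # length threshold reached: release the burst immediately
        if size >= length_threshold:
            bursts.append(current)
            current, start_time, size = [], None, 0
    if current:
        bursts.append(current)  # flush the tail at end of trace
    return bursts
```

With a length threshold of 250 bytes, four back-to-back 100-byte packets yield one full burst of three packets plus a flushed remainder; with a large threshold and a short timer, the timer alone dictates burst boundaries.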

2014
Vol 2014
pp. 1-9
Author(s):  
Pinar Kirci ◽  
Abdul Halim Zaim

Optical technology gains extensive attention and ever-increasing improvement because of the huge amount of network traffic caused by the growing number of Internet users and their rising demands. Wavelength division multiplexing (WDM) makes it easier to take advantage of optical networks, and WDM combined with optical burst switching (OBS) is the best choice for constructing networks with low delay and better data transparency. Furthermore, multicasting in WDM is an urgent solution for bandwidth-intensive applications. In this paper, a new multicasting protocol with OBS is proposed. The protocol depends on a leaf-initiated structure. The network is composed of a source, ingress switches, intermediate switches, edge switches, and client nodes. The performance of the protocol is examined with the Just Enough Time (JET) and Just In Time (JIT) reservation protocols. The paper also covers most of the recent advances in WDM multicasting in optical networks, grouped under three common headings: broadcast-and-select networks, wavelength-routed networks, and OBS networks. In addition, multicast routing protocols are briefly summarized, and optical burst switched WDM networks are investigated with the proposed multicast schemes.
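As background for the JET/JIT comparison above, the key difference between the two reservation protocols is when a core node's channel is reserved: JIT reserves immediately when the control packet arrives, while JET delays the reservation by the remaining offset so the channel is held only for the burst itself. A minimal sketch, with all names and time values illustrative:

```python
def holding_interval(protocol, ctrl_arrival, remaining_offset, burst_len):
    """Channel holding interval at a core node (simplified sketch).
    JIT: reserve immediately on control-packet arrival, release at
    burst end. JET: delay reservation until the burst is due
    (ctrl_arrival + remaining_offset), release at burst end."""
    burst_start = ctrl_arrival + remaining_offset
    if protocol == "JIT":
        return (ctrl_arrival, burst_start + burst_len)
    if protocol == "JET":
        return (burst_start, burst_start + burst_len)
    raise ValueError(f"unknown protocol: {protocol}")
```

For the same burst, JIT holds the channel for the offset plus the burst duration, while JET holds it only for the burst duration, which is why JET generally utilizes wavelengths more efficiently at the cost of more bookkeeping.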


2018
Vol 0 (0) ◽  
Author(s):  
T. Ravi Prakash Rao ◽  
V. Malleswara Rao

Providing reliable connection-oriented services in wavelength division multiplexing (WDM) networks becomes challenging because of poor channel quality and overload. Most existing works have not considered node failures and congestion, which are essential characteristics for survivable routing. In this paper, a load- and link-aware protection switching technique for survivable routing in WDM networks is proposed. The technique estimates the k shortest paths for a connection request and, based on four different path possibilities, selects the best path in terms of bandwidth as the connection path. For establishing the primary path, the least-virtual-hop-first routing algorithm is used. If the source node receives either a failure or a congestion warning message from the intermediate nodes, it triggers the load-balanced rerouting algorithm to establish the backup path. The proposed technique is simulated in NS-2, and the simulation results demonstrate the efficiency of the scheme.
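The path-selection step, estimating k shortest paths and then choosing the best candidate in terms of bandwidth, can be sketched as below. The best-first path enumeration and the bottleneck-bandwidth selection rule are simplified illustrations, not the authors' exact algorithms:

```python
import heapq

def k_shortest_paths(adj, src, dst, k):
    """Enumerate up to k loop-free shortest paths by total weight
    using best-first search over partial paths (a simple sketch,
    not Yen's algorithm). adj: {node: {neighbor: weight}}."""
    heap = [(0, [src])]
    found = []
    while heap and len(found) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == dst:
            found.append((cost, path))
            continue
        for nbr, weight in adj[node].items():
            if nbr not in path:  # keep candidate paths loop-free
                heapq.heappush(heap, (cost + weight, path + [nbr]))
    return found

def best_path_by_bandwidth(paths, link_bw):
    """Among candidate paths, pick the one whose bottleneck link
    has the most free bandwidth (a hypothetical selection rule).
    link_bw: {(u, v): free_bandwidth}."""
    def bottleneck(path):
        return min(link_bw[(a, b)] for a, b in zip(path, path[1:]))
    return max((p for _, p in paths), key=bottleneck)
```

On a small diamond topology A-B-D / A-C-D where both routes have equal hop cost, the selection falls to whichever route has the larger bottleneck bandwidth.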


2021
Vol 13 (11)
pp. 5889
Author(s):  
Faiza Hashim ◽  
Khaled Shuaib ◽  
Farag Sallabi

Electronic health records (EHRs) are important assets of the healthcare system and should be shared among medical practitioners to improve the accuracy and efficiency of diagnosis. Blockchain technology has been investigated and adopted in healthcare as a solution for EHR sharing while preserving privacy and security. Blockchain can revolutionize the healthcare system by providing a decentralized, distributed, immutable, and secure architecture. However, scalability has always been a bottleneck in blockchain networks due to the consensus mechanism and ledger replication to all network participants. Sharding helps address this issue by partitioning the network into small groups termed shards and processing transactions in parallel, running consensus within each shard with a subset of blockchain nodes. Although this technique helps resolve issues related to scalability, cross-shard communication overhead can degrade network performance. This study proposes a transaction-based sharding technique wherein shards are formed on the basis of a patient's previously visited health entities. Simulation results show that the proposed technique outperforms standard-based healthcare blockchain techniques in terms of the number of appointments processed, consensus latency, and throughput. The proposed technique eliminates cross-shard communication by forming complete shards based on "the need to participate" nodes per patient.
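The shard-formation idea above, grouping exactly the "need to participate" nodes per patient so that a patient's appointment transaction never crosses shard boundaries, can be sketched as follows; the data shapes and names are hypothetical:

```python
def form_patient_shards(visit_history):
    """Transaction-based shard formation sketch: each patient's
    shard is the set of health entities that patient previously
    visited, so consensus for that patient's transactions involves
    only those nodes and no cross-shard messages.
    visit_history: {patient_id: [entity, entity, ...]}."""
    return {patient: sorted(set(entities))
            for patient, entities in visit_history.items()}

def is_cross_shard(shards, patient, entity):
    """A transaction touching an entity outside the patient's shard
    would require cross-shard communication; by construction this
    never happens for previously visited entities."""
    return entity not in shards[patient]
```

By deduplicating repeat visits and scoping consensus to each patient's own shard, every appointment from a returning patient stays intra-shard.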


2012
Vol 50 (5)
pp. 48-55
Author(s):  
Pawel Wiatr ◽  
Paolo Monti ◽  
Lena Wosinska

1994
Vol 19 (1-2)
pp. 69-80
Author(s):  
Dr. Jon Warwick

2021
Author(s):  
Mokhles Mezghani ◽  
Mustafa AlIbrahim ◽  
Majdi Baddourah

Reservoir simulation is a key tool for predicting the dynamic behavior of a reservoir and optimizing its development. Fine-scale, CPU-demanding simulation grids are necessary to improve the accuracy of the simulation results. We propose a hybrid modeling approach that minimizes the weight of the full-physics (FP) model by dynamically building and updating an artificial intelligence (AI) based model that can quickly mimic the FP model. The methodology consists of first running the FP model; an associated AI model is then systematically updated using the newly performed FP runs. Once the mismatch between the two models falls below a predefined cutoff, the FP model is switched off and only the AI model is used. The FP model is switched on at the end of the exercise either to confirm the AI model's decision and stop the study, or to reject this decision (high mismatch between the FP and AI models) and upgrade the AI model. The proposed workflow was applied to a synthetic reservoir model in which the objective is to match the average reservoir pressure. For this study, to better account for reservoir heterogeneity, a fine-scale simulation grid (approximately 50 million cells) is necessary to improve the accuracy of the reservoir simulation results. Reservoir simulation using the FP model and 1,024 CPUs requires approximately 14 hours. During this history matching exercise, six parameters were selected to be part of the optimization loop. Therefore, Latin Hypercube Sampling (LHS) using seven FP runs is used to initiate the hybrid approach and build the first AI model. During history matching, only the AI model is used. At the convergence of the optimization loop, a final FP run is performed either to confirm the convergence for the FP model or to re-iterate the same approach starting from an LHS around the converged solution. The subsequent AI model is then updated using all the FP simulations done in the study.
This approach achieves the history match with a very acceptable quality of match, but with far fewer computational resources and much less CPU time. CPU-intensive, multimillion-cell simulation models are commonly used in reservoir development, and completing a reservoir study in an acceptable timeframe is a real challenge in such a situation. New concepts and techniques are needed to complete such studies successfully. The proposed hybrid approach shows very promising results in handling this challenge.
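The hybrid FP/AI loop described above can be sketched as follows. The analytic stand-in for the simulator, the nearest-neighbour surrogate, and the stratified (LHS-like) initial design are all illustrative assumptions, not the paper's actual models:

```python
import random

def full_physics(x):
    """Stand-in for the expensive full-physics (FP) simulator;
    here just a known analytic mismatch function for illustration."""
    return (x - 0.3) ** 2

def fit_surrogate(samples):
    """Fit a crude AI stand-in: nearest-neighbour interpolation
    over the FP runs performed so far. samples: [(x, fp(x)), ...]."""
    def model(x):
        return min(samples, key=lambda s: abs(s[0] - x))[1]
    return model

def hybrid_history_match(n_init=7, cutoff=0.05, iters=20, seed=1):
    """Hybrid FP/AI workflow sketch: seed the surrogate with a few
    FP runs on a stratified (LHS-like) design, optimize using only
    the surrogate, then confirm the converged point with one final
    FP run. All parameter names and numbers are illustrative."""
    rng = random.Random(seed)
    # stratified initial design: one sample per equal-width bin
    xs = [(i + rng.random()) / n_init for i in range(n_init)]
    samples = [(x, full_physics(x)) for x in xs]  # the only FP runs
    model = fit_surrogate(samples)
    # cheap optimization loop driven entirely by the surrogate
    best = min((model(x), x) for x in (i / iters for i in range(iters + 1)))
    # final FP run confirms (or rejects) the surrogate's answer
    fp_val = full_physics(best[1])
    converged = abs(fp_val - best[0]) <= cutoff
    return best[1], fp_val, converged
```

In the real workflow the FP run is hours of cluster time rather than one function call, which is exactly why the surrogate-driven inner loop pays off; a rejected final check would trigger a new LHS around the candidate and a surrogate update, as described above.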

