Accurate mathematical modeling and solution of TCP congestion window size distribution

2020 ◽  
Vol 163 ◽  
pp. 195-201 ◽  
Author(s):  
Gan Luan ◽  
Norman C. Beaulieu

2018 ◽  
Vol 0 (0) ◽  
Author(s):  
Reza Poorzare ◽  
Siamak Abedidarabad

Abstract: In Optical Burst Switching (OBS) networks, misinterpretation of the network's congestion status can reduce network performance. OBS networks are bufferless by nature, so a burst drop may be caused either by congestion or by contention, but TCP cannot distinguish between the two causes. TCP therefore wrongly decreases the congestion window size (cwnd), causing a significant reduction in network performance. In this paper, we employ a new algorithm that uses fuzzy logic and a set of thresholds to divide the network into several areas, which allows the problem to be solved. This scheme helps distinguish whether a burst drop is due to congestion or to burst contention in the network. Extensive simulation studies show that the proposed algorithm outperforms TCP Vegas in terms of throughput and packet delivery count.
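The abstract does not give the authors' actual membership functions or thresholds, so the following is only a minimal illustrative sketch of the general idea: a fuzzy membership value over an observed congestion indicator (here a hypothetical normalized `load` signal) is used to classify a burst drop as congestion or contention. All names and threshold values are assumptions, not the paper's algorithm.

```python
def membership_congested(load: float, low: float = 0.4, high: float = 0.8) -> float:
    """Fuzzy membership in the 'congested' set: 0 below `low`,
    1 above `high`, linear ramp in between (a standard fuzzy shape).
    The thresholds 0.4/0.8 are arbitrary placeholders."""
    if load <= low:
        return 0.0
    if load >= high:
        return 1.0
    return (load - low) / (high - low)

def classify_drop(load: float) -> str:
    """Label a burst drop by the dominant fuzzy class.
    Only a congestion-classified drop would trigger a cwnd reduction."""
    return "congestion" if membership_congested(load) >= 0.5 else "contention"
```

Under this sketch, `classify_drop(0.9)` yields `"congestion"` while `classify_drop(0.2)` yields `"contention"`, so only the former would cause TCP to shrink cwnd.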


Author(s):  
Nelson Luís Saldanha da Fonseca ◽  
Neila Fernanda Michel

In response to a series of congestion collapses on the Internet in the mid-'80s, congestion control was added to the transmission control protocol (TCP) (Jacobson, 1988), allowing individual connections to control the amount of traffic they inject into the network. This control involves regulating the size of the congestion window (cwnd) to impose a limit on the size of the transmission window. In the most widely deployed TCP variant on the Internet, TCP Reno (Allman, Floyd, & Partridge, 2002), changes in congestion window size are driven by the loss of segments. The congestion window is increased by 1/cwnd for each acknowledgement (ack) received and halved on the loss of a segment, a pattern known as additive increase multiplicative decrease (AIMD). Although this congestion control mechanism was derived at a time when line speeds were on the order of 56 kbit/s, it has performed remarkably well given that the speed, size, load, and connectivity of the Internet have increased by approximately six orders of magnitude in the past 15 years. However, the AIMD pattern of window growth seriously limits efficient operation of TCP Reno over high-capacity links, making the transport layer the network bottleneck. This text explains the major challenges involved in using TCP for high-speed networks and briefly describes some of the variations of TCP designed to overcome them.
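The two AIMD rules stated above (increase by 1/cwnd per ack, halve on loss) can be sketched directly; this is a simplified model in segments, omitting slow start, timeouts, and fast recovery.

```python
def on_ack(cwnd: float) -> float:
    """Additive increase: +1/cwnd per ack, i.e. roughly +1 segment per RTT."""
    return cwnd + 1.0 / cwnd

def on_loss(cwnd: float) -> float:
    """Multiplicative decrease: halve the window on a segment loss,
    never dropping below one segment."""
    return max(cwnd / 2.0, 1.0)

# One round trip of 10 acks followed by a loss:
cwnd = 10.0
for _ in range(10):
    cwnd = on_ack(cwnd)
cwnd = on_loss(cwnd)
```

The slow additive recovery after each halving is exactly what limits Reno on high-capacity links: after a loss on a long fat pipe, regaining a large window takes thousands of round trips.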


Author(s):  
Mahendra Suryavanshi ◽  
Dr. Ajay Kumar ◽  
Dr. Jyoti Yadav

Modern data centers provide dense inter-connectivity between each pair of servers through multiple paths. These data centers offer high aggregate bandwidth and robustness by using multiple paths simultaneously. The Multipath TCP (MPTCP) protocol was developed to improve throughput, share network link capacity fairly, and provide robustness during path failure by utilizing multiple paths over multi-homed data center networks. Running MPTCP for latency-sensitive, rack-local short flows with a many-to-one communication pattern at the access layer of multi-homed data center networks creates the MPTCP incast problem. In this paper, the Balanced Multipath TCP (BMPTCP) protocol is proposed to mitigate the MPTCP incast problem in multi-homed data center networks. BMPTCP is a window-based congestion control protocol that prevents constant growth of each worker's subflow congestion window. BMPTCP computes an identical congestion window size for all concurrent subflows by considering the bottleneck Top of Rack (ToR) switch buffer size and the number of concurrently transmitting workers. This helps BMPTCP avoid timeout events due to full window loss at the ToR switch. Based on the current congestion situation at the ToR switches, BMPTCP adjusts the transmission rate of each worker's subflow so that the total amount of data transmitted by all concurrent subflows does not overflow the bottleneck ToR switch buffer. Simulation results show that BMPTCP effectively alleviates MPTCP incast, improving goodput and reducing flow completion time compared to the existing MPTCP and EW-MPTCP protocols.
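The abstract's sizing rule (an identical window for every concurrent subflow, bounded by the bottleneck ToR buffer) suggests a simple back-of-the-envelope computation. The formula below is an assumption for illustration only, not BMPTCP's published equation: it divides the buffer evenly across all concurrent subflows so their windows cannot sum past the buffer.

```python
def balanced_cwnd(buffer_segments: int, num_workers: int,
                  subflows_per_worker: int) -> int:
    """Assign every concurrent subflow the same window, sized so the
    sum of all windows fits in the bottleneck ToR switch buffer.
    Floors at 1 segment so every subflow can still make progress."""
    total_subflows = num_workers * subflows_per_worker
    return max(buffer_segments // total_subflows, 1)
```

For example, with a 256-segment ToR buffer, 32 workers, and 2 subflows each, every subflow gets a 4-segment window, and 64 subflows x 4 segments exactly fills the buffer rather than overflowing it and triggering a full-window loss.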

