Question
Assume that TCP Reno is the protocol experiencing the behavior described below.
Initial Threshold = 36 segments
In the transmission round in which the 40th segment is sent, segment loss is detected by a timeout.
In the 23rd transmission round, segment loss is detected by triple duplicate ACKs.
Plot the evolution of TCP's congestion window from the 1st through the 30th transmission round.
On the curve, mark clearly the (transmission round, congestion window size) at the 1st point, the last point, and all the turning points.
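For reference, here is a minimal sketch of the round-by-round rules a TCP Reno sender follows, under the usual textbook simplifications: a loss detected in one round takes effect in the next round, slow start doubles the window but never past ssthresh, a timeout halves ssthresh and restarts slow start from one segment, and a triple-duplicate-ACK loss halves ssthresh and resumes congestion avoidance from the new ssthresh. The function and parameter names (reno_cwnd_evolution, loss_events) are illustrative, and the loss rounds in the usage example are made up rather than taken from the question.

def reno_cwnd_evolution(num_rounds, initial_ssthresh, loss_events):
    """Return (round, cwnd) pairs for rounds 1..num_rounds.

    loss_events maps a transmission round to "timeout" or
    "triple_dup_ack"; a loss detected in round r is assumed to
    change cwnd starting from round r + 1.
    """
    cwnd, ssthresh = 1, initial_ssthresh
    history = []
    for rnd in range(1, num_rounds + 1):
        history.append((rnd, cwnd))
        event = loss_events.get(rnd)
        if event == "timeout":
            # Timeout: halve ssthresh, restart slow start from cwnd = 1.
            ssthresh = max(cwnd // 2, 1)
            cwnd = 1
        elif event == "triple_dup_ack":
            # Fast retransmit/recovery (textbook simplification):
            # halve ssthresh and continue from the new ssthresh.
            ssthresh = max(cwnd // 2, 1)
            cwnd = ssthresh
        elif cwnd < ssthresh:
            # Slow start: exponential growth, capped at ssthresh.
            cwnd = min(cwnd * 2, ssthresh)
        else:
            # Congestion avoidance: linear growth, one segment per round.
            cwnd += 1
    return history

# Illustrative run only; the loss rounds below are made up and are
# not the values implied by the question above.
if __name__ == "__main__":
    for rnd, cwnd in reno_cwnd_evolution(20, 36, {8: "timeout", 16: "triple_dup_ack"}):
        print(f"round {rnd:2d}: cwnd = {cwnd} segments")

Plotting the resulting (round, cwnd) pairs gives the kind of curve the question asks for, with turning points wherever the growth mode changes or a loss event resets the window.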
Explanation / Answer
The Internet first experienced a problem called congestion collapse in the 1980s. Here is a recollection of the event by Craig Partridge, Research Director for the Internet Research Department at BBN Technologies (reproduced by permission of Craig Partridge):

Bits of the network would fade in and out, but usually only for TCP. You could ping. You could get a UDP packet through. Telnet and FTP would fail after a while. And it depended on where you were going (some hosts were just fine, others flaky) and time of day (I did a lot of work on weekends in the late 1980s and the network was wonderfully free then). Around 1pm was bad (I was on the East Coast of the US and you could tell when those pesky folks on the West Coast decided to start work...).

Another experience was that things broke in unexpected ways – we spent a lot of time making sure applications were bullet-proof against failures. One case I remember is that lots of folks decided the idea of having two distinct DNS primary servers for their subdomain was silly – so they'd make one primary and have the other one do zone transfers regularly. Well, in periods of congestion, sometimes the zone transfers would repeatedly fail – and voila, a primary server would timeout the zone file (but know it was primary and thus start authoritatively rejecting names in the domain as unknown).

Finally, I remember being startled when Van Jacobson first described how truly awful network performance was in parts of the Berkeley campus. It was far worse than I was generally seeing. In some sense, I felt we were lucky that the really bad stuff hit just where Van was there to see it.
Since intermediate nodes can act as controllers and measuring points at the same time, a congestion control scheme could theoretically exist in which neither the sender nor the receiver is involved. This is, however, not a practical choice, as most network technologies are designed to operate across a wide range of environments, including the smallest possible setup: a sender and a receiver interconnected via a single link. While congestion collapse is less of a problem in this scenario, the receiver should still have some means to slow down the sender if it is busy doing more pressing things than receiving network packets, or if it is simply not fast enough. In this case, the function of informing the sender to reduce its rate is normally called flow control.

The goal of flow control is to protect the receiver from overload, whereas the goal of congestion control is to protect the network. The two functions lend themselves to combined implementations because the underlying mechanism is similar: feedback is used to tune the rate of a flow. Since it may be reasonable to protect both the receiver and the network from overload at the same time, such implementations should have the sender use a rate that is the minimum of the values obtained from the flow control and congestion control calculations. Owing to these resemblances, the terms ‘flow control’ and ‘congestion control’ are sometimes used synonymously, or one is regarded as a special case of the other.
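As a small illustration of that last point, the sender's usable window at any instant can be taken as the minimum of the receiver-advertised window (flow control) and the congestion window (congestion control). The function name and the numeric values below are assumptions made for the sake of the example, not part of any standard API.

def usable_window(cwnd, rwnd, bytes_in_flight):
    """Sender-side limit combining congestion control (cwnd) and
    flow control (rwnd): never exceed the stricter of the two."""
    return max(min(cwnd, rwnd) - bytes_in_flight, 0)

# With cwnd = 16 000 bytes, rwnd = 8 000 bytes and 4 000 bytes already
# unacknowledged, the sender may put at most 4 000 more bytes in flight.
print(usable_window(cwnd=16_000, rwnd=8_000, bytes_in_flight=4_000))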