Question: What are the main current challenges in TCP congestion control algorithms? Are there any foundational trade-offs that every algorithm has to make? For example, minimizing router buffer sizes while still reacting quickly to the available bandwidth on noisy links?

Context: I was reading some of the old discussions about router bufferbloat from 20 years ago. I haven't followed the topic much since then, back when it was discussed a lot in the Linux community. Assume we use something like TCP Reno, which has a window size $W$ and, on detecting congestion, halves it to $W/2$. Further, take a network topology with a single flow:

Sender ---- Router ---- Receiver

where the link between the router and the receiver is the slower one, with capacity $C$. If the router has buffer size $B$, the window can cover the $C \times T$ of data in flight on the wire plus the $B$ queued at the router, so the largest possible TCP window size without loss is:

$$W_{max} = C\times (T + B/C)$$

while the minimum window size that fully utilizes the network is:

$$W_{min} = C\times T$$

Here $T$ is the round-trip time (RTT) of the path, excluding queuing delay at the router.
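
To put numbers on this (the values are illustrative, not from the question): with $C = 10\,\text{Mbit/s}$ and $T = 100\,\text{ms}$,

$$W_{min} = C \times T = 10\,\text{Mbit/s} \times 0.1\,\text{s} = 1\,\text{Mbit} \approx 125\,\text{kB},$$

and a buffer of one bandwidth-delay product, $B = 125\,\text{kB}$, gives $W_{max} = C \times T + B = 250\,\text{kB}$.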

If we now want to choose the smallest possible buffer size that guarantees full utilization with a single flow, we must choose $B$ such that

$$W_{max} / 2 = W_{min}$$

Solving this gives $B = C \times T$, i.e., one bandwidth-delay product ($RTT \times C$). The same equation can be solved if on congestion we set $W' = \alpha W$ instead of halving; then, as shown below, $\alpha \to 1$ implies $B \to 0$.
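
To make the general step explicit: requiring $\alpha \, W_{max} = W_{min}$ gives

$$\alpha \times (C T + B) = C T \quad\Longrightarrow\quad B = \frac{1-\alpha}{\alpha} \times C T,$$

which recovers $B = C \times T$ at $\alpha = 1/2$ and goes to $0$ as $\alpha \to 1$.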

In the above case, we see that if we multiply the window size by a constant arbitrarily close to $1$, the required buffer size tends to zero. In other words, we can tune the TCP congestion control algorithm to make the required buffers arbitrarily small. However, a scaling factor close to one would perform terribly if we suddenly need to share bandwidth with someone else: the window shrinks so slowly that we would see heavy packet loss before it gets small enough.
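
A quick way to see both sides of this trade-off is a toy fluid model of a single AIMD flow: the window grows by one packet per RTT and is multiplied by $\alpha$ when the buffer overflows. This is only a sketch under those assumptions; the parameter values and the `avg_utilization` helper below are mine, not from any standard tool:

```python
# Toy fluid model: one AIMD flow through a single bottleneck queue.
# All values are illustrative assumptions, not taken from the question:
# capacity is in packets per second and the window is in packets.

def avg_utilization(alpha: float, buf_pkts: float,
                    c_pkts: float = 1000.0, base_rtt: float = 0.1,
                    rtts: int = 10_000) -> float:
    """Average bottleneck utilization of a sawtooth with backoff factor alpha."""
    bdp = c_pkts * base_rtt        # C*T: smallest window that fills the link
    w_max = bdp + buf_pkts         # largest window before the buffer overflows
    w = alpha * w_max              # start just after a congestion event
    sent = capacity = 0.0
    for _ in range(rtts):
        sent += min(w, bdp)        # below the BDP the link partly idles
        capacity += bdp
        w += 1.0                   # additive increase: +1 packet per RTT
        if w > w_max:              # buffer overflow -> multiplicative decrease
            w = alpha * w
    return sent / capacity

for alpha in (0.5, 0.8, 0.95):
    for buf in (0.0, 50.0, 100.0):
        u = avg_utilization(alpha, buf)
        print(f"alpha={alpha:4.2f}  buffer={buf:5.1f} pkts  utilization={u:.3f}")
```

With $\alpha = 0.5$ the flow needs the full bandwidth-delay product ($100$ packets here) of buffering to keep utilization at $1.0$, while $\alpha = 0.95$ stays near full utilization with almost no buffer. What the model leaves out is the flip side: a flow with $\alpha$ close to one takes many more congestion events to halve its rate, so it vacates bandwidth for a newly arriving flow far more slowly.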
