The history of TCP congestion control is long enough to fill a book (and we did), but the work done in Berkeley, California, from 1986 to 1988 casts a long shadow, with Jacobson’s 1988 SIGCOMM paper ranking among the most cited networking papers of all time.
Slow-start, AIMD (additive increase, multiplicative decrease), RTT estimation, and the use of packet loss as a congestion signal were all in that paper, laying the groundwork for the following decades of congestion control research. One reason for the paper's influence, I believe, is that the foundation it laid was solid while still leaving plenty of room for improvement, as the continued efforts to refine congestion control today attest.
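The core mechanics are simple enough to sketch. Below is a minimal, hypothetical simulation of how a sender's congestion window evolves under slow-start and AIMD, with packet loss as the only congestion signal. The constants, the scripted loss, and the halving-on-loss behavior (Reno-style, rather than the original Tahoe restart from one segment) are illustrative assumptions, not Jacobson's BSD code.

```python
# Sketch of slow-start + AIMD congestion window evolution.
# Illustrative simulation only; window is counted in segments.

MSS = 1  # one segment, so the arithmetic stays readable

def on_ack(cwnd, ssthresh):
    """Grow the window on each ACK: exponentially below ssthresh
    (slow-start), additively above it (congestion avoidance)."""
    if cwnd < ssthresh:
        return cwnd + MSS            # slow-start: +1 per ACK, doubling per RTT
    return cwnd + MSS * MSS / cwnd   # additive increase: ~+1 MSS per RTT

def on_loss(cwnd):
    """Treat packet loss as the congestion signal: halve the window
    (multiplicative decrease, Reno-style) and reset ssthresh."""
    ssthresh = max(cwnd / 2, 2 * MSS)
    return ssthresh, ssthresh

# Example trace: 40 ACKs, with a single loss injected at step 20.
cwnd, ssthresh = 1.0, 16.0
for t in range(40):
    if t == 20:
        cwnd, ssthresh = on_loss(cwnd)
    else:
        cwnd = on_ack(cwnd, ssthresh)
    print(f"t={t:2d}  cwnd={cwnd:5.2f}  ssthresh={ssthresh:5.2f}")
```

Run long enough with recurring losses, this loop traces the familiar AIMD sawtooth; real TCP implementations layer RTT estimation, retransmission timers, and fast retransmit on top of this skeleton.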
And the problem is fundamentally hard: we’re trying to get millions of end-systems that have no direct contact with each other to cooperatively share the bandwidth of bottleneck links in some moderately fair way using only the information that can be gleaned by sending packets into the network and observing when and whether they reach their destination.
It seems clear that there is no such thing as a perfect congestion control approach, which is why we continue to see new papers on the topic 35 years after Jacobson’s. But the internet's architecture has fostered an environment in which effective solutions can be tested and deployed to achieve distributed management of shared resources.
In my view that’s a great testament to the quality of that architecture. ®