Internet Speed Limits

Aug 11, 2010 21:38

George Ou presents an interesting networking experiment. He's trying to demonstrate that CDNs are a bigger "threat" to application fairness than routing and QoS policies would be. As evidence, he shows how a YouTube server located 47ms away gets a 2.3x faster TCP connection than another server located 105ms away. In isolation either one could saturate his link, but when they compete, TCP favors the nearer server. In fact, the nearby TCP connection causes more jitter for delay-sensitive traffic! The moral, he asks: why is colocating a server inside the ISP (as YouTube does) OK, while routing policies that would seek to correct this imbalance are bad?

Without getting into the gory policy details, I do think that "I want my packets to get to the head of the queue" arrangements may be OK, but what we observed with Comcast and BitTorrent was quite a different sort of non-neutrality.

But my interest was piqued by his link to an earlier article explaining how TCP window size and latency determine maximum throughput. This is a fairly basic computation that anybody taking a networking class will do, which is why there's a solution: TCP window scaling. I think his write-up unfairly dismisses this widely-adopted extension to the protocol. He says that it only multiplies the upper limits; it does not change the fact that nearby servers can, in theory, communicate faster than longer-latency sources. But the question has to be one of practice: with up to a gigabyte of window available, can you saturate the bottleneck link? If you can, then the theoretical speed limit of a closer server is moot (modulo other effects like the experiment above).

(Also, I'm not sure his speed calculation should use round-trip latency rather than unidirectional latency. The receiver is allowed to send ACKs more than once per window, and the steady state should be one bandwidth-delay product in flight through the network. It's been a while since my last networking class. ;)
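For concreteness, the speed limit in question is just window size divided by round-trip time: at most one window of unacknowledged data can be in flight per round trip. A quick sketch of that arithmetic in Python, plugging in the two RTTs from his experiment and the 64 KB maximum of an un-scaled window:

```python
def max_tcp_throughput(window_bytes, rtt_seconds):
    # At most one window of unacknowledged data can be in flight
    # per round trip, so throughput is capped at window / RTT.
    return window_bytes / rtt_seconds  # bytes per second

# 65535 bytes is the largest window the 16-bit header field can
# advertise without the window scaling option.
WINDOW = 65535

for rtt_ms in (47, 105):
    mbps = max_tcp_throughput(WINDOW, rtt_ms / 1000.0) * 8 / 1e6
    print("RTT %3d ms -> at most %.1f Mbit/s" % (rtt_ms, mbps))
```

That works out to about 11.2 Mbit/s for the near server and 5.0 Mbit/s for the far one; the ratio of the two limits is just the inverse ratio of the RTTs.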

On the other hand, maybe I just have a naive network-researcher's view of the world, and maybe TCP window scaling isn't turned on for most servers. So, here are the results of an informal experiment: I collected TCP headers exiting my house for a period of about 20 hours. How many of the servers respond with window scaling?
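The counting itself is straightforward: grab the window-scale option out of the SYN and SYN-ACK packets. Here's a minimal sketch using scapy (not necessarily the exact tooling I used; the pcap filename is illustrative):

```python
# Tally TCP window-scale options from a capture of SYN packets,
# e.g. one made with: tcpdump -w syns.pcap 'tcp[tcpflags] & tcp-syn != 0'
from collections import Counter
from scapy.all import rdpcap, TCP

counts = Counter()
for pkt in rdpcap("syns.pcap"):
    if TCP not in pkt:
        continue
    tcp = pkt[TCP]
    # Plain SYNs are our outbound requests; SYN-ACKs (ACK bit set)
    # are the servers' inbound responses.
    direction = "inbound" if tcp.flags & 0x10 else "outbound"
    wscale = None
    for name, value in tcp.options:
        if name == "WScale":
            wscale = value
    key = "no wscale" if wscale is None else "wscale %d" % wscale
    counts[(direction, key)] += 1

for (direction, key), n in sorted(counts.items()):
    print("%s %s: %d" % (direction, key, n))
```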

Outbound connections:

9724 with window scaling
* 3790 with wscale 0
* 5888 with wscale 2
* 3 with wscale 7
* 43 with wscale 8
4055 with no window scaling option

I am not at all sure which machines or applications in my house are using such large wscale options.

Inbound responses:

7794 with window scaling
* 1333 with wscale 0
* 219 with wscale 1
* 198 with wscale 2
* 96 with wscale 3
* 337 with wscale 4
* 848 with wscale 5
* 2105 with wscale 6
* 1380 with wscale 7
* 629 with wscale 8
* 649 with wscale 9
* 103 with wscale 11
5993 without window scaling

This suggests that a fair number of servers do, in fact, support much higher speed limits than his examples. A wscale of 6 allows a window of up to 65536 * 2^6 bytes, so even with a latency of 200ms we have a theoretical TCP throughput of 167 Mbit/sec, far higher than the likely bottleneck connection.
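(Checking that arithmetic with the same window-over-RTT formula as above:)

```python
window = 65536 << 6   # wscale 6: the advertised window is multiplied by 2^6
rtt = 0.200           # a pessimistic 200 ms round trip
print("%.1f Mbit/s" % (window * 8 / rtt / 1e6))  # -> 167.8 Mbit/s
```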

networking, geek, internet
