What We Talk about When We Talk about Bandwidth

[Image: internet_is_made_of_pipes.jpg]

Cellular carriers' competing claims about what constitutes 4G (fourth-generation) cellular data networks got me to thinking about how speed is only one part of the story of why allegedly faster networks are being built. I've been writing about Wi-Fi since 2000, and that informs my thinking: Wi-Fi has matured to the point where raw speed no longer has the marketing value it once did, because networks are generally fast enough. Instead, multiple properties come into play.

I want to talk about bandwidth, throughput, latency, and capacity, and how each of these relates to the others.

Let me start all folksy with analogies. For simplicity's sake, let's consider a medium-sized city that serves water to all its residents through one central reservoir. The reservoir's capacity represents the total pool of water it can deliver at one time to residents through pipes of varying sizes and at different distances.

The diameter of the pipe, of course, determines how much water can pass from the reservoir to your particular tap. All of the pipes range from somewhat to very leaky, so the diameter represents only the potential water you could receive at any given time (under the same pressure); the leaks reduce that. You receive your raw diameter's worth of water after those leaks take their toll. For people who live far from the reservoir, I turn the pressure way down, because too much water leaks out along the way. Thus, they receive less water with taps wide open than their pipes' diameters would suggest.

In nearly every part of town, the pipes bringing water to neighborhoods feed far too many smaller-diameter pipes serving homes and businesses. If everyone flushes his or her toilet at once, only a trickle of water flows out of each, albeit continuously. I'm cheap, too, in my thought-experiment water system: I've run such small pipes into many areas that the diameter of any single home or building's feed is greater than that of the pipe feeding the whole neighborhood. What a bastard.

Finally, I designed this city water supply in a peculiar fashion. In a normal city system, water is under pressure all the way to your tap. When you open a faucet, the pressure immediately pushes water out; the quantity is relative to the aperture of your faucet and the system's pressure. At 80 psi, you can get a shower head that sprays out one or two gallons per minute.

But in my system, your faucet's handle is directly and mechanically connected to a pipe far away from your home, at a water distribution point or back at the reservoir. Each time you turn off a faucet, the valve at the far end shuts and all the water in between drains out. When you need water, you have to wait for the faucet to open that remote valve, and for the water to reach you from wherever it starts.

Because I'm a water nazi, my system has sensors that only let water flow if you have a receptacle being filled. Thus, if you want to fill up a gallon jug of water, you wait a moment for the water to flow, but it can fill quite fast. If you're having a big sit-down dinner for 100 people, my sensors turn off the water between each glass you fill. It can take a good 30 minutes for those 100 glasses. (Shouldn't you fill up pitchers instead? I'm not your mother.)

Good? Let's switch off the folksy bit.

Capacity. The reservoir represents the overall pool of bandwidth or capacity of the network. Even though it's constantly replenished, you can't take more water out at once than fits in the pool. Capacity is measured by how much raw bandwidth is available across part of the system or the whole, depending on what you want to manage. Picture central POPs (points of presence), where fiber delivers gigabits per second, as your reservoirs, with feeders out to water towers and smaller pools in the form of metropolitan Ethernet, ATM, T3s, T1s, and other technologies eventually reaching DSLAMs, cable head-ends, and cellular base stations.
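To put rough numbers on the reservoir, here's a minimal Python sketch of oversubscription, the everyone-flushes-at-once problem from above. The backhaul size, subscriber count, and advertised rate are all invented for illustration, not drawn from any real network.

```python
# A toy model of the "everyone flushes at once" problem: comparing one
# POP's backhaul (the reservoir) against what its subscribers were sold.
# All numbers here are invented for illustration.

backhaul_capacity_mbps = 1_000    # the reservoir: one POP's fiber feed
subscribers = 500                 # homes and businesses fed from it
advertised_rate_mbps = 20         # each subscriber's pipe diameter

subscribed_total = subscribers * advertised_rate_mbps
oversubscription = subscribed_total / backhaul_capacity_mbps

print(f"Total subscribed bandwidth: {subscribed_total} Mbps")
print(f"Oversubscription ratio: {oversubscription:.0f}:1")
print(f"Worst-case share if all flush at once: "
      f"{backhaul_capacity_mbps / subscribers:.0f} Mbps each")
```

At a 10:1 ratio, each "20 Mbps" tap sees about 2 Mbps when everyone draws at once, which is exactly why operators count on most taps being closed most of the time.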

Bandwidth. The diameter of a pipe corresponds to how much water may be delivered to a home; likewise, a given network pipe has a certain raw speed. For wired or wireless connections, that depends on the two sides of a transaction handshaking to agree on encodings that determine the speed, which, in turn, are constrained by distance.
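Here's a hedged sketch of that negotiation: the two sides settle on the fastest encoding the measured signal strength will support. The thresholds and rates below are simplified stand-ins, not actual 802.11 or cellular parameters.

```python
# A sketch of rate negotiation: weaker signal (roughly, greater
# distance) forces a simpler, slower encoding. Thresholds and rates
# are illustrative stand-ins, not real 802.11 MCS values.

ENCODINGS = [  # (minimum signal in dBm, encoding name, raw rate in Mbps)
    (-60, "64-QAM 5/6", 450),
    (-70, "16-QAM 3/4", 225),
    (-80, "QPSK 1/2",    75),
    (-90, "BPSK 1/2",    15),
]

def negotiate_rate(signal_dbm: float) -> tuple[str, int]:
    """Return the fastest (encoding, raw rate) the signal allows."""
    for min_signal, name, rate in ENCODINGS:
        if signal_dbm >= min_signal:
            return name, rate
    raise ConnectionError("signal too weak to establish a link")

for signal in (-55, -75, -88):  # near, mid-range, and far from the radio
    name, rate = negotiate_rate(signal)
    print(f"{signal} dBm -> {name} at {rate} Mbps raw")
```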

Throughput. After leaks, how much water arrives? Networks are designed for particular signaling rates for given encodings or protocols. Gigabit Ethernet may push a billion bits a second through a wire, and 802.11n with three receive/transmit chains can gross 450 Mbps over the air. But bits used for framing packets, error correction, and other overhead reduce that by a little with Ethernet and quite a lot with Wi-Fi, even when the Wi-Fi signal is in perfect isolation from interference. Throughput is your real data rate after overhead is subtracted: take the size of a 100 MB file in bits and divide by the seconds it took to get from A to B.
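In code, the arithmetic is simple; the point is that throughput is measured, not advertised. The 4.2-second transfer time below is a made-up figure, chosen to show how far a real transfer can fall below a 450 Mbps raw rate.

```python
# Throughput: bits delivered divided by seconds elapsed. The transfer
# time is a hypothetical measurement for a 100 MB file over a
# "450 Mbps" 802.11n link.

file_size_bytes = 100 * 1_000_000   # 100 MB payload
transfer_seconds = 4.2              # hypothetical measured wall-clock time

throughput_mbps = file_size_bytes * 8 / transfer_seconds / 1_000_000
print(f"Effective throughput: {throughput_mbps:.0f} Mbps")   # ~190 Mbps

raw_rate_mbps = 450                 # the negotiated raw link rate
overhead = 1 - throughput_mbps / raw_rate_mbps
print(f"Lost to framing, error correction, and contention: {overhead:.0%}")
```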

Latency. This is the toughest one to explain in analogies or with actual technology. Latency is a measure of the round-trip time of a signal, and it may have little to do with the distance traversed, even across many thousands of miles. Rather, latency accrues at each hop along a packet's path. Slower links increase latency, as do more links and subtler aspects of system design. All of these make it take longer to prime the pump--for water to start coming out of the faucet--when a system has high latency between two points or on the local link. Latency is most noticeable in two-way communications, like voice and video chat, because you wind up responding out of sync with the other party (unless you say, "over"). But it also makes interactive Web sites feel unresponsive, because they react sluggishly to your actions. Latency can also cause jittery or lower-quality video if a device can't give a streaming server the feedback it needs to throttle down or open up over very short periods of time.
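A toy model makes the hop-count point concrete. Every per-hop delay below is invented, but the shape is realistic: a physically long path with a few fast hops can have lower latency than a short path with many slow ones.

```python
# Latency accumulates per hop, not per mile. All per-hop delays here
# are invented for illustration.

def round_trip_ms(hop_delays_ms):
    """One-way delay is the sum of each hop's delay; a round trip doubles it."""
    return 2 * sum(hop_delays_ms)

cross_country = [0.5, 0.5, 1.0, 20.0]            # few hops, one long fiber leg
across_town   = [5.0, 8.0, 6.0, 7.0, 9.0, 5.0]   # many slow local hops

print(f"Cross-country, few hops: {round_trip_ms(cross_country):.0f} ms RTT")
print(f"Across town, many hops:  {round_trip_ms(across_town):.0f} ms RTT")
```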

2G and 3G cellular data networks, 802.11b and 802.11g Wi-Fi, and pre-switched (hub-only) Ethernet networks were all about getting enough capacity and bandwidth to deliver a usable amount of data to every party that wanted in on the action. With switching, Ethernet essentially creates a private pool for each port on the switch, allowing full-speed communication between ports without sharing the bandwidth with other devices on other ports. Neat, but not practical in wireless communications. (There's switched wireless, but it has more to do with managing dumb access points with smart switches than with adding capacity by establishing conflict-free channels.)

So-called 4G networks (which aren't 4G by the standards of the international body that sets such standards, the ITU-R, but, oh well) and the fastest 802.11n networks aren't designed to bring the maximum possible bandwidth to every party. Rather, there's the expectation that devices will be throttled well below what's possible most of the time to provide a greater reservoir of available bandwidth. That's partly why you see average ranges and peak speeds advertised. The peak speeds are often for sustained downloads, and are balanced against other local users' needs. Even if the bandwidth is available, you might be throttled to allow bursty uses by other parties.
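One common mechanism for throttling of that kind--letting short bursts through at full speed while capping the sustained rate--is a token bucket. The sketch below is generic, with arbitrary rates; it's not any carrier's actual policy.

```python
# A generic token bucket: traffic can burst at full speed until the
# bucket empties, then is held to the sustained rate. A common
# throttling mechanism, not any particular carrier's policy.

import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps        # sustained rate: tokens added per second
        self.capacity = burst_bits  # burst allowance: the bucket's max fill
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, bits: float) -> bool:
        """Spend tokens to send; refuse (throttle) if the bucket is dry."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if bits <= self.tokens:
            self.tokens -= bits
            return True
        return False

# Hypothetical policy: 5 Mbps sustained, bursts of up to 2 Mb at full speed.
bucket = TokenBucket(rate_bps=5e6, burst_bits=2e6)
print(bucket.allow(1.5e6))  # True: the burst fits
print(bucket.allow(1.5e6))  # False: sustained demand is throttled
```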

With Wi-Fi, the three-radio 450 Mbps 802.11n hardware now on the market isn't intended to give every user 300 Mbps or more of throughput. Rather, the designs provide faster connections at greater distances, fill in dead areas, and allow more simultaneous users. The same is true of 600 Mbps 802.11n (not quite here yet), and the future 802.11ac update to Wi-Fi, promising 1 Gbps or faster in the 5 GHz band. The general goal now is better networks, not just faster ones.

Latency has been a particular focus in cellular networks. The voice component of 3G networks is designed separately from data in order to keep latency as low as possible. (It's all bits in 2G and 3G, but you can handle signaling, framing, priority, and other factors differently.) With 4G networks, the differentiation between voice and data is supposed to disappear; everything uses Internet protocol (IP), which means latency has to be locked down to prevent poor quality phone calls and jittery video.

Mobile network companies boast of their new networks' high speeds, but speed isn't the real bugbear of cell operators. WiMax, LTE, and HSPA+ networks are, or will be, much faster--but they're also going to be deeper and broader.

And, yes, I did just explain that the Internet is a series of pipes.

Image by Dann Stayskal via Creative Commons.


  1. conduits of correspondence?
     hoses of homogeneity?
     mains of metaphors?

     i’m so confused…
     but humbled by your insight into all things interwebbish

  1. All the pipes are spiraled; some have loops more densely packed than others. The distance the water needs to travel to get from point A to B is longer if the loops are more densely packed, which means the water takes longer. That’s latency.

  2. Yes, it is a series of tubes. I’ve even explained it to people before, but they convinced me that the whole “dumptruck” meme was funny enough to ignore reality.

  3. It’s rare that an article almost, but not quite, completely fails to explain a technical issue such as this.

    At least, I’m mystified. Anyone else?

    1. The constructive comment would be to explain what you thought was wrong, and for me, if I agree, to update the post to better reflect reality.

      Do you not like analogies?

      1. Analogies can be useful when they tap into common knowledge. (Even these are frequently misleading to the point of uselessness – take the planetary model of the atom as a prime example.) In this article though, you end up having to educate the user about water systems in order to make the connection with data networks. Worse, because latency isn’t an issue in the water system, you have to introduce the “faucet at the source” tweak. I’m reminded of Jamie Zawinski’s tongue-in-cheek observation, “Some people, when confronted with a problem, think ‘I know, I’ll use regular expressions.’ Now they have two problems.” It would have been simpler to just explain the issues without resorting to the water analog.

        1. Interesting observation. I still believe that people relate more to pipes and water, even if they don’t know all the workings, than to the actual technical part, and now they have a boring cocktail party story to tell, to boot!

  4. I have been waiting for this post ever since BoingBoing mocked Ted Stevens for saying that the Internet is a series of tubes. I never really got that mocking Ted Stevens thing.

  5. I’m actually not that mystified, because there wasn’t much to be mystified about – other than where the content of the article went. Maybe there’s a blockage in one of my internet pipes?

  6. Thanks for the article, Glenn, nicely done. The only mystery remaining for me is what I’m going to do when I burn through the ever-lower “unlimited” bandwidth cap that much faster…

  7. Stevens was (just) sort of right, but I also don’t feel bad for mocking him at all. It was the rest of what he said that made him look clueless, among several other issues, like trying to oppose net neutrality.

    secure.wikimedia.org/wikipedia/en/wiki/Series_of_tubes

  8. “It would have been simpler to just explain the issues without resorting to the water analog.”

    I disagree. How many people still think Facebook won’t sell their private info, or thought, just a few years ago, that the AOL icon on their desktop _is_ the internet?

    Have you ever actually tried to explain something like technical networking details to someone, and to which someones?

    Also, some of the most common intro university textbooks use water plumbing as well. I don’t think there’s a better metaphor, even if it’s not perfect.

    1. “Have you ever actually tried to explain something like technical networking details to someone, and to which someones?” Yes, explaining science and technology to laypeople is central to how I earn a (self-employed) living. In my experience, analogies are typically only useful when you’ve got a reasonable understanding of both problem domains. Take, for example, the third paragraph of Neal Stephenson’s classic “Mother Earth Mother Board” article. Poetic as the “Wires warp cyberspace …” paragraph is, you need a passing understanding of *both* general relativity and networks to really understand what Stephenson is trying to convey. It works as literature, but not as a device for teaching the uninitiated about either differential geometry or network topology.

  9. My wife came home the other day, shaking her head because a co-worker thought the internet was in her monitor. She was getting a new monitor that day, and was concerned because she wasn’t sure the new one would have the internet in it.

    Good article, but I’m still only a few degrees less ignorant than my wife’s co-worker.

  10. A good analogy for ADSL (and cellular) signal “intensity” would be to relate the inverse square law to hydraulic headloss :V

  11. Good article! However, I don’t agree that bandwidth and capacity are two different things – they are in fact the same thing. Regardless of whether one part of the network has 100 Gbit of “bandwidth,” another has 100 Mbit of “bandwidth,” and a third has 1 Mbit of “bandwidth” – the “capacity” of any session using those three equals the slowest one – thus the maximum capacity is 1 Mbit. It’s also misleading to state that bandwidth has anything to do with handshaking or distance … bandwidth is determined well underneath TCP (where syn/ack and latency do limit throughput) – but for things like UDP (or RTP for VoIP sitting on top of UDP…), throughput is NOT affected by latency (two-way comms are, of course, but the effective bits/sec is not slower on a 200ms connection vs. a 20ms connection over UDP…). To “net” this out (sorry…, couldn’t resist…), there is capacity and throughput OR there is bandwidth and throughput (take your pick between capacity and bandwidth…), but there isn’t all three.

  12. “Bandwidth. The diameter of a pipe corresponds to how much water may be delivered to a home; ditto, a given network pipe has a certain raw speed.”

    Bandwidth goes as the square of the diameter, not the diameter.

  13. I always thought it was trucks carrying packets on roads of different widths having faster or slower toll-gates…

  14. It’s like explaining cats to dogs: neither one understands a thing you’re saying; they just want food and a nap, and think that’s what you’re talking about.

    If someone can explain how any of the cell phone providers can justify charging $20.00 a month for text messages when you’ve already paid for the bandwidth, then we’ll be on track to explain why my satellite TV has “paid programming” late at night on stations that I’ve already paid for…

    Content is king, context is kingmaker.

  15. @33: “misleading to state that bandwidth has anything to do with the handshaking or distance”

    Except I didn’t state either of those things. For handshaking, I was explaining how two sides negotiate the bandwidth of the system, not that handshaking affects bandwidth itself. If you can’t agree on the encoding, you don’t have a network. Ditto, distance affects the encoding that can be negotiated in wired or wireless.

  16. Ugh. This article is horrible. I mean, not for trying – you’re trying. But doggamn, that was horrible.

    As a corrective: here’s a couple of points about how bandwidth works. This leads to all sorts of policy questions eventually, and we can disagree about them, but we can’t have a meaningful disagreement with this sort of crap as the baseline.

    Communication pipelines are real-time, limited by their design. You either use them for some meaningful communication, or you don’t.

    Most people, most of the time, don’t use that much bandwidth. As TV migrates to the net, that’s becoming different, but the baseline still holds.

    So, if you bundle hundreds of folks together, it averages out – everyone can get fast-ish service (here in the US, our service sucks, but this is a relative comparison), all on the same downstream drop from the phone or cable company.

    Netflix is breaking this. Apple and Google too, but mostly Netflix. People stream 80-140K streams a lot now, all the time. To be clear, 140K is about what a T1 can do – 10 years ago, this cost anywhere between $700 and $2K a month, depending on location and service guarantees.

    In a way, we’re starving ourselves. Other countries can do new service. The US sucks at broadband because we let telecom and cable run it. South Korea has 50 Mbit to the curb, and it isn’t just because of universal service that moms in rural places are still using modems (although it is amusing that telecom uses a subsidy to explain why they can’t do better).

    The big deal isn’t about Glenn’s concern about latency, or how latency enables complicated handshakes. I regularly (like, more times a day than I can reasonably count) initiate complicated encrypted communication protocols to machines that are physically multiple thousand miles away. We call it ssh, and it is stable, reliable, and works fine for interactive use even when there’s a lag of 20-50 ms or so, much of that being light speed, not router overhead.

    The problem is how to deliver TV for 10 bucks a month, on a near-constant basis, to something like 200M people, on the back of a crappy network built out by a state firm that doesn’t want to be called a state firm, that would rather protect it than coerce it into actually getting along with things.

    Along the way, we get accidental billionaires who tell us that cable is the natural order of things, and suck it, proles.
