Cellular carriers' competing claims as to what constitutes 4G (fourth-generation) cellular data networks got me thinking about how speed is only one part of the story behind why allegedly faster networks are being built. I've been writing about Wi-Fi since 2000, and that informs my thinking: Wi-Fi has matured to a point where raw speed doesn't have the same marketing value it once did, because networks are generally fast enough. Instead, multiple properties come into play.
I want to talk about bandwidth, throughput, latency, and capacity, and how each relates to the others.
Let me start all folksy with analogies. For simplicity's sake, let's consider a medium-sized city that serves water to all its residents through one central reservoir. The reservoir's capacity represents the total pool of water it can deliver at one time to residents through pipes of varying sizes and at different distances.
The diameter of the pipe, of course, determines how much water can pass from the reservoir to your particular tap. All of the pipes range from somewhat to very leaky, so the diameter represents only the potential water you could receive at any given time (under the same pressure); the leaks reduce that. You receive your raw diameter's worth of water minus whatever those leaks take. For people who live far away from the reservoir, I turn the pressure way down, because too much water leaks out along the way. Thus, they receive less water with taps wide open than their pipes' diameters would suggest.
In nearly every part of town, the pipe bringing water to a neighborhood has way too many smaller-diameter pipes branching off to feed homes and businesses. If everyone flushes his or her toilet at once, only a trickle of water flows out of each, albeit continuously. I'm cheap, too, in my thought-experiment water system: I've brought very small-diameter pipes into many areas, so that the combined diameter of every home or building's feed is greater than that of the pipe that feeds the whole neighborhood. What a bastard.
Finally, I designed this city water supply in a peculiar fashion. In normal city systems, water is under pressure through the system all the way to your tap. When you open up a faucet, the pressure immediately pushes water out. The quantity is relative to the aperture of your faucet and the system's pressure. At 80 psi, you can get a shower head that sprays out one or two gallons per minute.
But in my system, your faucet's handle is directly and mechanically connected to a pipe far away from your home, at a water distribution point or back at the reservoir. Each time you turn off a faucet, the valve at the far end shuts and all the water drains out. When you need water, you have to wait for the faucet to open up that remote valve, and for the water to reach you from wherever it starts.
Because I'm a water nazi, my system has sensors that only let water flow if you have a receptacle being filled. Thus, if you want to fill up a gallon jug of water, you wait a moment for the water to flow, but it can fill quite fast. If you're having a big sit-down dinner for 100 people, my sensors turn off the water between each glass you fill. It can take a good 30 minutes for those 100 glasses. (Shouldn't you fill up pitchers instead? I'm not your mother.)
Good? Let's switch off the folksy bit.
Capacity. The reservoir represents the overall pool of bandwidth or capacity of the network. Even though it's constantly replenished, you can't take more water out at once than fits in the pool. Capacity is measured by how much raw bandwidth is available across part of the system or the whole, depending on what you want to manage. Picture central POPs (points of presence), where fiber delivers gigabits per second, as your reservoirs, with feeders out to water towers and smaller pools in the form of metropolitan Ethernet, ATM, T3s, T1s, and other technologies that eventually reach DSLAMs, cable head-ends, and cellular base stations.
Bandwidth. The diameter of a pipe corresponds to how much water may be delivered to a home; ditto, a given network pipe has a certain raw speed. For wired or wireless connections, that speed depends on the two sides of a transaction handshaking to agree on encodings that determine the speed, which, in turn, are based on distance.
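That handshake can be sketched in a few lines: each side picks the fastest encoding the signal will support, so raw bandwidth falls with distance. The rate table and signal thresholds below are entirely hypothetical, not drawn from any real standard.

```python
# Link-rate negotiation sketch: pick the fastest encoding whose signal
# threshold is met. Thresholds (dBm) and rates (Mbps) are hypothetical.
RATE_TABLE = [
    (-50, 450),  # strong signal -> fast encoding
    (-65, 300),
    (-75, 150),
    (-85, 54),
]

def negotiated_rate_mbps(signal_dbm: float) -> float:
    """Return the fastest rate whose signal threshold is satisfied."""
    for threshold, rate in RATE_TABLE:
        if signal_dbm >= threshold:
            return rate
    return 6  # minimum fallback rate for a barely usable link

print(negotiated_rate_mbps(-60))  # 300: decent signal, mid-tier rate
print(negotiated_rate_mbps(-90))  # 6: distant client falls to the floor
```

Move the client farther away (a lower dBm figure) and the negotiated "diameter" of the pipe shrinks, before any leaks are even counted.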
Throughput. After leaks, how much water arrives? Networks are designed for particular signaling rates for given encodings or protocols. Gigabit Ethernet may push a billion symbols a second through a wire, and 802.11n with three receive/transmit chains can gross 450 Mbps over the air. But bits used for framing packets, error correction, and other overhead can reduce that by a little with Ethernet and quite a lot with Wi-Fi, even when the Wi-Fi signal is in perfect isolation from interference. Throughput is your real data rate after overhead is subtracted: the number of bits in a 100 MB file divided by the seconds it took to get from point A to point B.
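The arithmetic is simple enough to sketch. The file size and elapsed time below are made-up numbers for illustration:

```python
# Throughput: useful bits delivered divided by elapsed time.
def throughput_mbps(file_size_bytes: float, seconds: float) -> float:
    """Real data rate in Mbps: payload bits over transfer time."""
    return (file_size_bytes * 8) / seconds / 1_000_000

# A hypothetical 100 MB file that takes 40 seconds to arrive:
file_size = 100 * 1_000_000  # 100 MB in bytes
elapsed = 40.0               # seconds (hypothetical)

print(f"{throughput_mbps(file_size, elapsed):.0f} Mbps")  # 20 Mbps
```

Note that 20 Mbps of real throughput can coexist with a much higher advertised link rate; the difference is all that framing and error-correction overhead.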
Latency. This is the toughest one to explain with analogies or with actual technology. Latency is a measure of the round-trip time of a signal, but it may have little to do with the distance traversed, even across many thousands of miles. Rather, latency accrues at each hop along a packet's path. Slower links increase latency, as do more links and subtler aspects of system design. All of those contribute to taking longer to prime the pump (for water to start coming out of the faucet) when a system has high latency between two points or in the local link. Latency is most noticeable in two-way communications, like voice and video chat, because you wind up responding out of sync with the other party (unless you say, "over"). But it also makes interactive Web sites seem unresponsive because they sluggishly react to your actions. Latency can cause jittery or lower-quality video if a device can't provide feedback to a streaming server to throttle down or open up over very short periods of time.
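The hop-by-hop nature of latency is easy to model. The per-hop delays below are invented for illustration; the point is that one slow link, such as a cellular radio hop, can dominate the total regardless of distance.

```python
# Latency accumulates at every hop along a packet's path, largely
# independent of physical distance. Per-hop delays (ms) are hypothetical.
def round_trip_ms(hop_delays_ms):
    """One-way latency is the sum of per-hop delays; a round trip doubles it."""
    return 2 * sum(hop_delays_ms)

wired_path = [0.5, 1.0, 2.0, 1.0, 0.5]      # a few fast hops
cellular_path = [40.0, 5.0, 2.0, 1.0, 0.5]  # one slow radio hop dominates

print(round_trip_ms(wired_path))     # 10.0 ms
print(round_trip_ms(cellular_path))  # 97.0 ms
```

Both paths have five hops, but the single slow first hop on the cellular path multiplies the round-trip time nearly tenfold.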
2G and 3G cellular data networks, 802.11b and 802.11g Wi-Fi, and pre-switched (hub-only) Ethernet networks were all about getting enough capacity and bandwidth to deliver a usable amount of data to every party that wanted in on the action. With switching, Ethernet essentially creates a private pool for each port on the switch, allowing full-speed communication between ports without sharing the bandwidth with other devices on other ports. Neat, but not practical in wireless communications. (There's switched wireless, but it has more to do with managing dumb access points with smart switches than with adding capacity by establishing conflict-free channels.)
So-called 4G networks (which aren't 4G as defined by the ITU-R, the international body that sets such standards, but, oh well) and the fastest 802.11n networks aren't designed to bring the maximum possible bandwidth to every party. Rather, there's the expectation that devices will be throttled well below what's possible most of the time to preserve a greater reservoir of available bandwidth. That's partly why you see average ranges and peak speeds advertised. The peak speeds are often for sustained downloads, and are balanced against other local users' needs. Even if the bandwidth is available, you might be throttled to allow bursty uses by other parties.
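One common way to hold a device below peak while still permitting short bursts is a token bucket. This is a generic traffic-shaping illustration, not a description of any particular carrier's actual mechanism; the rates are hypothetical.

```python
# Token-bucket shaping sketch: sustained rate plus a burst allowance.
class TokenBucket:
    def __init__(self, rate_mbps: float, burst_mb: float):
        self.rate = rate_mbps          # sustained rate allowed, Mbit/s
        self.capacity = burst_mb * 8   # burst allowance, Mbit
        self.tokens = self.capacity    # start with a full bucket

    def tick(self, seconds: float):
        """Refill tokens at the sustained rate, up to the burst cap."""
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def try_send(self, megabits: float) -> bool:
        """Send only if enough tokens remain; a burst drains the bucket."""
        if megabits <= self.tokens:
            self.tokens -= megabits
            return True
        return False

bucket = TokenBucket(rate_mbps=5.0, burst_mb=2.0)  # hypothetical limits
print(bucket.try_send(16.0))  # True: a 16 Mbit burst fits the 2 MB allowance
print(bucket.try_send(1.0))   # False: bucket drained, must wait for refill
bucket.tick(1.0)              # one second passes; 5 Mbit refilled
print(bucket.try_send(1.0))   # True
```

The device sees brief peaks well above its sustained rate, while the network keeps the long-run average down so other users' bursts fit in the same pool.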
With Wi-Fi, the three-radio 450 Mbps 802.11n hardware now on the market isn't intended to give every user 300 Mbps or more of throughput. Rather, the designs provide faster connections at greater distances, fill in dead areas, and allow more simultaneous users. The same is true of 600 Mbps 802.11n (not quite here yet), and of the future 802.11ac update to Wi-Fi for 5 GHz, which promises 1 Gbps or faster. The general goal is now better networks, not faster networks.
Latency has been a particular focus in cellular networks. The voice component of 3G networks is designed separately from data in order to keep latency as low as possible. (It's all bits in 2G and 3G, but you can handle signaling, framing, priority, and other factors differently.) With 4G networks, the differentiation between voice and data is supposed to disappear; everything uses Internet protocol (IP), which means latency has to be locked down to prevent poor quality phone calls and jittery video.
Mobile network companies are trying to boast of new networks having high speeds, but speed isn't the bugbear of cell operators. WiMax, LTE, and HSPA+ networks are, or are going to be, much faster, but they're also going to be deeper and broader.
And, yes, I did just explain that the Internet is a series of pipes.
Image by Dann Stayskal via Creative Commons.