AT&T has decided to throttle users of its unlimited plans, and unlimited wireless data plans are disappearing fast in the US.
Yet a recent study indicated that data caps are a crude and unfair tool for relieving congestion. The study recommends that “policies honestly implemented to reduce bandwidth usage during peak hours should be based on better understanding of real usage patterns and should only consider customers’ behavior during these hours.”
The problem isn’t how many bits people use. There is no big bucket of bits that the carriers will run out of if everybody uses too much. Nor do the flashing lights on the routers cost more to run when people move their bits through the pipes in large numbers.
What is a problem is connection capacity. If too many people are hitting the towers, as I understand it, the towers have a hard time handling the traffic, and you get the poor wireless data service you see in some big cities and at tech conferences.
There’s also the good old-fashioned flood of packets that causes increased packet loss as routers get overworked by too much traffic. That’s what makes DDoS attacks work. So there *are* problems, but limiting the amount of data I use at 3 AM, when nobody else is using the Internet, doesn’t help solve them.
Throttling may help somewhat, because it knocks people onto a slower network with different capacity, but again it’s a brickbat-to-the-head kind of solution. Sure, folks who use lots of data are more likely to be connecting at peak times, but that doesn’t mean they are, and it doesn’t mean they’re the root cause of the problem.
The situation reminds me of any situation where a line or queue forms. Look at bridge toll booths or airport security lines for similar behavior. They can get horribly backed up, but the solution is not to somehow punish or throttle people who drive or fly often.
I suggest that carriers abandon data caps in favor of a ‘fast pass’ model. When the network reaches capacity or becomes congested, all regular users get throttled a bit, unless they pay for a higher tier of service. That may sound bad at first, but remember that right now, all users get throttled anyway in places like San Francisco, and the only option you have is to pay more to use your phone less. What if, instead, you had the same service with the same issues you have now, for the same price, but with no data cap, plus the option to pay more per month to get your connection prioritized? You’re not violating net neutrality, because all users are connected and all traffic is treated equally. You just don’t get throttled.
Of course, the fast pass model requires pricing that ensures not everybody uses it. You want to avoid the situation you sometimes see on the Bay Bridge, where the FasTrak lanes are backed up but the other lanes are not.
Carriers might even be able to provide tiers of fast pass, where the more you pay, the less likely you are to get throttled. And the throttling only happens at peak times; in non-peak hours everyone has full unthrottled access anyway.
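To make the fast pass idea concrete, here is a minimal sketch of how a tiered throttling policy could work. The tier names, rates, and congestion threshold are all invented for illustration; no real carrier system is being modeled.

```python
# Hypothetical sketch of the tiered "fast pass" throttling policy.
# Throttled rate (as a fraction of full speed) per tier during congestion;
# higher tiers pay more and are throttled less. All values are made up.
TIER_RATE = {
    "regular": 0.25,    # regular users slow down the most at peak
    "fastpass": 0.75,   # fast pass: mild throttling
    "fastpass+": 1.0,   # top tier: never throttled
}

CONGESTION_THRESHOLD = 0.9  # fraction of tower capacity in use


def allowed_rate(tier: str, network_load: float) -> float:
    """Return the fraction of full bandwidth a user gets right now.

    Off-peak (load below the threshold), everyone gets full speed and
    there is no data cap; throttling only kicks in during congestion.
    """
    if network_load < CONGESTION_THRESHOLD:
        return 1.0
    return TIER_RATE[tier]
```

Off-peak, `allowed_rate("regular", 0.3)` returns `1.0` for every tier; at peak load, regular users drop to a quarter speed while the top tier keeps running unthrottled.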
I can already imagine some of you screaming why this is a horrible idea, so have at it, respectfully, in the comments. In the end maybe we can figure out some model that is agreeable to most, if not all? Who knows?
4 Responses to “Maybe carriers should take a cue from Bridges and Airports”
It’s good food for thought, but the queue analogy doesn’t work too well. During peak traffic times, you aren’t hurled off of the bridge or sent back home and asked to try your commute again (although that wouldn’t be a bad idea). A pay for performance model will only create a have and have-not divide or cause a false inflation in price for service.
Perhaps a better idea would be to come up with a feedback loop to the user showing current network congestion, similar to how signal strength is supposed to be shown on the mobile device. Higher network congestion == fewer bars and the owner of that device can decide how urgent the task is or at least have an understanding about the cause of delay.
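The congestion-to-bars mapping suggested above could be as simple as the following sketch; the bucket boundaries and the 0–5 bar scale are arbitrary illustrations.

```python
# Rough sketch of the congestion feedback idea: map the tower's current
# load onto a familiar bar display. More congestion == fewer bars.

def congestion_bars(network_load: float, max_bars: int = 5) -> int:
    """network_load runs from 0.0 (idle) to 1.0 (saturated)."""
    load = min(max(network_load, 0.0), 1.0)  # clamp out-of-range readings
    return round((1.0 - load) * max_bars)
```

An idle network shows full bars (`congestion_bars(0.0)` is `5`), a saturated one shows none, and the user can judge for themselves whether their task is urgent enough to fight the congestion.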
Implementing this would require a sizable infrastructure update, I would guess. Both mobile devices and backend network equipment would need to be replaced or updated to support this.
Maybe not perfect.
I see what you’re saying, and that’s where the analogy breaks down. We can’t send just your wheel home because we don’t have room for it on the bridge. And the cost of doing so would be prohibitive anyway. Whereas in the Internet dropping a packet is nothing. So yeah, not a perfect analogy for sure and that may reveal some underlying issues with the proposal.
On the more philosophical side, I really don’t think the fast pass model would create haves and have-nots. Right now we just have-little and have-less. This model would at least allow some haves. And price inflation usually ends up being self-corrected, IMHO.
Thanks for the thoughts!
Why do we limit our expectations from carriers to responding to the explosion in wireless data usage by simply adding more infrastructure? Wouldn’t it be nice if there was an Apple in that industry, that had a visionary leader, who was forward thinking, and decided to innovate instead of just react. Imagine if they spent a portion of their profits on R&D, and bringing to market breakthroughs out of the labs we hear about.
Like this one out of Stanford that could double wireless bandwidth:
And then Rice University:
Whether it’s the content industry or the communication network carriers, I’m convinced the answer is always innovation. And I think it’s time we start expecting it more and more from these legacy industries.
This is like one of those story problems you get in a critical thinking class — I like it.
If you stick with the bridge analogy, where the fast pass drivers are more stuck than the rest, you can imagine ways around that: a second raised lane that stacks a second fast pass lane above the first, or maybe one above and one below, effectively widening the traffic throughput without affecting the non-fast-pass traffic.
The way that might work with data is if there were a second network reserved as an overflow carrier. Anyone who paid the fast pass fee would get bumped over there at peak times, when their data throughput dropped below some level. (Maybe a better analogy in this case is the gears of an automatic transmission: when the car revs up high enough, it kicks into a different gear.)
But this makes me wonder about Bram Cohen’s latest goal to “kill off television.” As I understand it, he’s trying to design torrent-based infrastructure where digital television is sent directly to your box via torrent, rather than directly from the networks, and the more people watching a show the faster you get it.
Could some similar kind of infrastructure be implemented for wireless data? I know it works for television because everyone is getting and sharing the same bits, whereas no two people are having the same wireless conversation. But there must be some duplication in wireless traffic: web browsing, games, streaming, etc. If that kind of data could be shunted over onto some kind of torrent-like network, where each wireless user also shared back some of the data being accessed to others nearby, would that help lighten the overall load?