The Fight For End-To-End: Part One
The summary provided by the conference organizers has a brief description of end-to-end:
"The "end-to-end argument" was proposed by network architects Jerome Saltzer, David Reed and David Clark in 1981 as a principle for allocating intelligence within a large scale computer network. It has since become a central principle of the Internet's design. End-to-end [e2e] counsels that "intelligence" in a network should be placed at its ends -- in applications -- while the network itself should remain as simple as is feasible, given the broad range of applications that the network might support."Another way to view end-to-end might be as a sort of network non-interference policy: all bits are created equal. The problem is that there are substantial economic incentives to treat bits differently, and these incentives are changing the architecture of the Internet in ways which may be detrimental to public values.
The workshop covered a number of areas:
- Voice over IP
- Network Security
- Quality of Service
- Content Caching
- Broadband
- Wireless
Jerome Saltzer started off with a technical overview of the end-to-end argument. In summary: digital technology builds systems of stunning complexity, and the way to manage this complexity is to modularize. For networking, this resulted in the layer model that many slashdot readers are familiar with. He suggested that designers should be wary of putting specific functions in lower layers, since all layers above must deal with that design decision. For a longer explanation, one can always read the original paper. If you've never heard of end-to-end before, I do suggest reading this paper before continuing. It's short.
First, Scott Bradner described two competing architectures for voice-over-IP protocols: one which employs central servers to direct and manage calls (the Media Gateway Control model, or Megaco), and one which puts most of the intelligence in the end-points, with the phones/computers originating the calls (the Session Initiation Protocol, or SIP). One important difference: SIP phones can use a central server to direct calls, but Megaco phones have no capability to act independently. Building a great deal of intelligence into the central servers is less end-to-end-compliant than building it into phones at the edges of the network.
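To make the architectural difference concrete, here is a minimal sketch of the kind of call-setup message a SIP endpoint composes entirely on its own. All names, hosts, and identifiers are invented, and a real INVITE would also carry an SDP body describing the media session:

    # Hypothetical illustration: in SIP, the endpoint itself builds the
    # call request; no central server has to originate anything.
    def make_invite(caller, callee, call_id):
        """Compose a bare-bones SIP INVITE (no SDP media body)."""
        return "\r\n".join([
            f"INVITE sip:{callee} SIP/2.0",
            "Via: SIP/2.0/UDP phone.example.com:5060",
            f"From: <sip:{caller}>",
            f"To: <sip:{callee}>",
            f"Call-ID: {call_id}",
            "CSeq: 1 INVITE",
            "Content-Length: 0",
            "",
            "",
        ])

    print(make_invite("alice@example.com", "bob@example.org",
                      "1234@phone.example.com"))

A Megaco/MGCP gateway, by contrast, waits to be told what to do by its call agent -- which is exactly why it cannot act independently.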
One member of the audience pointed out that Federal law requires companies to build wiretapping capabilities into phone switches and wireless network equipment, and wondered how that would be implemented if the phones initiated the connections themselves (SIP). Traditional wiretapping is predicated upon the idea that there is a central server which all communications pass through. The panel candidly replied that when no central server is used and encryption is employed, wiretapping is difficult. One audience member pointed out that wiretapping at centralized switches is not the most effective way to do it, anyway -- since switches can be routed around and communications can be encrypted, the only truly effective way to wiretap would be to build tapping capabilities all the way at the edge of the network -- the phone itself. While some of the audience laughed, I think most of the participants also realized the dark undertones of this suggestion.
Next the discussion turned to innovation. In one model, the central servers would be controlled by companies with a vested interest in managing them conservatively, suppressing competition, etc. In the other, individuals would be able to create/control their own phones on the perimeter of the network, and the only barrier to innovation would be finding someone else to adopt your improvement as well so that the two of you could communicate. In the first model, innovations which benefited the company would be the only ones permitted. In the second one, any innovation which benefited the end-user would be possible.
Finally the discussion moved to a rarely thought about side effect of voice over IP. Universal service -- phone service to (nearly) every resident of the United States -- is funded through access charges on your phone bill. In effect, people in cheap-to-service areas are subsidizing those in expensive-to-service areas, ranging from the badlands of Nevada to wilderness areas of Alaska. From a societal point of view, ubiquitous access to telephones has been a great boon, but providing it requires a societal commitment -- otherwise people living outside of major population centers might never have phone service. Suppose now that traditional telephony is replaced by voice over IP, and no central servers are involved -- there would be no easy way to collect the access charges which subsidize outlying areas. While lowering such taxes may have widespread appeal, completely abandoning the commitment to universal service would be a great loss to society.
The next focus was network security. Firewalls are probably the most obvious breaks in the end-to-end paradigm -- after all, these devices' sole purpose is to stand in the way of network connections, and decide which are permitted and which are not. Participants brought up (but thankfully, quickly moved past) the true-but-useless point that if all operating systems were secured properly, there would be no need for firewalls.
Hans Kruse pointed out that if security must be implemented at the end anyway -- as it must if any incoming traffic is permitted through the firewall -- then there's no reason to do it at the center as well. David Clark put forth the useful distinction between mandatory and discretionary access controls -- mandatory controls being ones put into place by someone else, discretionary ones put into place by you. Discretionary controls do not violate end-to-end, but mandatory ones generally do. Michael Kleeman noted that firewalls are put into place as often to control the actions of users inside the firewall as to control access from outside.
Doug Van Houweling spoke regarding Network Address Translation (NAT). NAT joins two networks together, and is typically used to connect a network of machines with non-routable IP addresses to the global Internet. NAT is an outgrowth of the limited availability of IPv4 addresses, but is also employed in some cases as a poor man's security measure. Van Houweling described NAT as an affront to end-to-end: any application which requires address transparency breaks, end-to-end encryption becomes impossible, and applications sometimes transmit data in the TCP/IP headers which NAT alters. The group noted that NAT could be eliminated simply by putting more addresses into circulation. Later in the workshop, Andrew McLaughlin talked about the address allocation process for IPv6 and said that it is shaping up to be much better than that for IPv4.
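A minimal sketch of the rewriting step that causes this breakage -- the addresses, ports, and table layout are invented for illustration. The key point is that the far end only ever sees the gateway's public address, never the true source:

    # Sketch of a NAT gateway's translation step (all addresses invented).
    import itertools

    PUBLIC_IP = "203.0.113.1"            # the gateway's one routable address
    _next_port = itertools.count(40000)  # fresh public ports for new flows
    _table = {}                          # public port -> (private ip, port)

    def translate_out(src_ip, src_port, dst_ip, dst_port):
        """Rewrite an outbound packet's source to the public address."""
        pub_port = next(_next_port)
        _table[pub_port] = (src_ip, src_port)
        return (PUBLIC_IP, pub_port, dst_ip, dst_port)

    def translate_in(dst_port):
        """Map a reply back to the private host; unknown port == unsolicited."""
        return _table[dst_port]

    print(translate_out("192.168.1.10", 3456, "198.51.100.7", 80))
    # the server at 198.51.100.7 sees 203.0.113.1, not 192.168.1.10

Any application that embeds its private address in the data it sends now refers the far end to an unreachable host -- the transparency failure described above.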
The workshop moved on next to Quality of Service. QoS here covers a wide range of proposals (and a few working implementations) for selectively speeding up or slowing down network traffic -- a sort of nice for network data flows. The "benign" use of QoS is to ensure that strongly time-sensitive traffic, like videoconferencing or telephony, gets priority over the download of NT Service Pack 16. There are less-benign uses as well: the red flag in the QoS realm was Cisco's 1999 white paper encouraging cable Internet operators to use Cisco's QoS features to speed up access to proprietary (read: profitable) content while slowing down content from competitors, which raised concerns about the role of ISPs in traffic delivery and about abuses by telecom carriers which are also content providers.
This segment started with an overview of QoS. There are several ways to implement QoS on a network. The simplest is to build a network with enough capacity that it is never maxed out; if the network has sufficient bandwidth, there's no need to worry about QoS in the first place. There are costs, though, to maintaining sufficient excess capacity. This approach is called "adequate provisioning" if it is your preferred method of managing traffic, or "over-provisioning" if you prefer one of the other QoS approaches. The other approaches under consideration are an integrated services architecture (IntServ) and a differentiated services architecture (DiffServ). The former would monitor and track each individual data flow -- the call you place to your mother in Singapore could be treated differently from the call you place to your grandmother in Kraków. The latter would only differentiate between classes of service -- all videoconferencing would be treated similarly, for example. Of the three, adequate provisioning is fully end-to-end, DiffServ is less so, and IntServ is highly non-compliant.
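One way to see the difference in end-to-end terms is the state each approach forces into the network. A sketch, with invented class names: a DiffServ-style router keeps a fixed handful of per-class queues, while an IntServ-style router would need an entry for every individual flow:

    # Sketch: DiffServ state is constant (one queue per class), while
    # IntServ state grows with every flow. Class names are invented.
    from collections import deque

    class DiffServRouter:
        CLASSES = ("voice", "video", "best_effort")   # fixed and small

        def __init__(self):
            self.queues = {c: deque() for c in self.CLASSES}

        def enqueue(self, packet, dscp_class):
            self.queues[dscp_class].append(packet)

        def dequeue(self):
            # strict priority: serve classes in declared order
            for c in self.CLASSES:
                if self.queues[c]:
                    return self.queues[c].popleft()

    r = DiffServRouter()
    r.enqueue("service pack chunk", "best_effort")
    r.enqueue("20ms voice frame", "voice")
    print(r.dequeue())   # the voice frame jumps the queue

    # An IntServ router would instead keep something like
    #   reservations[(src, dst, sport, dport, proto)] = promised_rate
    # for every active flow -- the per-call state described above.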
Jerome Saltzer (from the audience) made the point that no QoS technique provides real guarantees of service, and any technique except having plenty of excess bandwidth available violates the principles of end-to-end. He emphasized that people should be aware of the trade-offs.
Jamie Love not only mentioned the Cisco white paper but pointed out that this situation lends itself to behavior like that which has landed Microsoft in hot water -- using one's control of a particular system to speed up one's own content and impede competitors'. A member of the audience countered that QoS would allow companies to create different levels of service -- pay more for fast access, less for slow access -- and that this was a good thing.
There were two distinct classes of problems identified. The first is similar to the distinction among methods for carrying voice over IP: the companies that control the QoS-enabled servers get to control who gets to innovate in QoS-related areas. The second, related problem is that of carriers using QoS features to promote their own content. That problem has traditionally been solved by requiring a separation of carriage and content -- keeping the owner of the lines and the provider of content over those lines separate. The current FCC and FTC are not enforcing that traditional check against monopolization of content in telecommunications; thus it's likely that unless governmental policies change, AOL/Time Warner will be in a position to promote its own content through control of the cable Internet services it owns.
Doug Van Houweling then spoke and noted that the Internet2 project is taking a very strong stance promoting QoS, because that stance is seen as necessary to promote investment in Internet2 architecture.
An audience member spoke up and suggested that the best regulatory course would be regulation with a light touch -- the minimum controls needed to enable genuinely necessary QoS while disallowing abusive uses. At this point Deborah Lathen asked the $64,000 question: how would the FCC draw this fine regulatory distinction? No one had a good answer.
In Part two tomorrow: transparent caching, broadband and wireless access, and capitalism.
Server reliability & performance (Score:1)
Impacts IDS (Intrusion Detection Systems) too (Score:1)
Recently, security lists (like Bugtraq [securityfocus.com]) have been getting more and more traffic about "attacks" that turn out to be one proprietary outfit or another pumping out packets to "map" the Net. (Screw the admin who is trying to figure out what the hell they're up to; some of them wouldn't tell, even when asked directly.)
How do you get this sort of information in an e2e environment? Shouldn't there be a less wasteful way to determine this sort of thing?
Re:Yawn, QoS (Score:2)
True enough! People buying ISP service can understand things like bandwidth (of course), reliability/uptime, and even peering quality much better than complex priority protocols that require administration. So the marketers focus on that, and QoS withers on the vine.
Re:bandwidth (Score:1)
Paraphrased: There is no such thing as enough bandwidth.
That is incorrect. The most bandwidth I can use is limited by my disk subsystem, CPU, my input devices (including video camera), and the number of individuals who want to see my content all at one time.
Remember, there *is* such a thing as too much porn.
Great point. (Score:2)
Re:Obligatory Napster reference (Score:2)
Re:More government regulation, great! (Score:3)
This has to be one of the most asinine conclusions I've ever seen. Did you even read the rest of the Britannica entry? Specifically, the part you left out in the middle of your quote. I assume you did, since it tells a very different story than the selective quote you provided.
For those who are too lazy to read for themselves, here's the part tylerh left out:
Vail was brought back into the company as president in 1907, and from then until his retirement in 1919 he molded AT&T into virtually the organization that lasted until 1984. Vail set about trying to achieve a monopoly for AT&T over the American telecommunications industry. He consolidated the Bell associated companies into state and regional organizations, acquired many previously independent companies, and achieved control over Western Union in 1910.
In other words, AT&T used standard laissez faire capitalist techniques to achieve their monopoly. But what about the Willis-Graham Act of 1921? It didn't create the monopoly; ATT had done that themselves. Do a Google search on it, and you'll find that (essentially) all it did was exempt telcos from the Sherman Antitrust Act.
So what's the moral of the story? Aside from the fact that tylerh has no problem using selective quoting to deceive the lazy, what we see is that ATT basically bought a law that gave them (temporary) immunity from antitrust prosecution. The gov't didn't create ATT, or do anything to create its monopoly. They simply refused to use federal law to restrain ATT in 1921. To imply that ATT used a gov't-granted monopoly to put down their "growing competition" is sheer intellectual dishonesty.
No need for firewalls? (Score:1)
Not true. Without an IP Masquerading firewall, I'd have to have a separate Internet IP for every networked device on my home LAN. This is expensive and stupid. NAT is a good thing. If a machine doesn't offer a service, why does it need to be on the raw Internet? Even if it does, I still might not want that service available to the whole world. Easier to set that up once on a firewall than on every single box I have to manage, wouldn't you agree?
Would you really want the ability to program your VCR available to the entire world? Appliances with web GUIs are coming, and firewalling them off is a good thing. Why waste routable addresses on the 'net if you don't have to? Why encrypt and authenticate things (the VCR) when you don't have to?
Re:More government regulation, great! (Score:1)
Odd, it's not having that effect in PA...
Re:i want higher priority (Score:2)
A QOS that makes sense for end-to-end (Score:2)
If the user marked all traffic as high, it would have the same result as marking it all low or medium -- the total available bandwidth would simply be spread evenly across the applications using it.
Re:No, think about it... (Score:1)
Re:A QOS that makes sense for end-to-end (Score:1)
Theoretically, by doing so you could simply add up the bandwidth given to your endpoints and purchase that as your upstream connection. That provider could do the same thing, et cetera.
Re:Correction (Score:1)
I'm not sure that I agree with your reasoning, although it does appear that the original comment was referring to South Dakota.
Taking your example, you can say "I went to the mall" and it means the mall in your town. If you say "I went to the Mall", then it means you went to that monster one in Minnesota (or, depending on context, maybe the big parklike area in downtown Washington, DC). So even though the capitalized, proper version of the noun means one specific place, the non-capitalized but specific use of the term means something like "the closest example of a mall".
So if there were bad lands in your state, you could say "We went through the badlands" and not be talking about South Dakota. You could even say "We went through the badlands of Nevada", if Nevada had any badlands of course. It's only when you say "We went through the Badlands of Nevada" that you are definitely incorrect.
Yes, I don't want to get back to work, why do you ask?
NAT isn't just about tight address space (Score:1)
Ick! Not quite. NAT also lets users squeeze several machines through the one IP address they get from stingy ISPs. ISPs would have to be giving away multiple IP addresses to users before NAT disappears. It's possible the huge IPv6 space will trigger that giveaway, but I don't see it as a given.
Re:More government regulation, great! (Score:1)
And where does this power come from? In many cases (I won't go so far as to say all cases) it comes from Government, Inc. The DMCA is an obvious example.
Government regulation of things like power and cable TV also seems to always mean a Government granted monopoly, leaving one company exclusive access to a bunch of consumers to pull money from, until they are wealthy enough to bully small competitors, even if their official monopoly is repealed.
Unfortunately, since this [the monopoly-caused imbalance of power] has already happened, getting Government inc. out of businesses they shouldn't have meddled with in the first place without leaving a nasty power imbalance between the former monopoly and the small potential competitors is going to be really tricky business...
A vote for the lesser of two evils is still a vote for Evil.
Re:QoS Bad (well, not necessarily in offices) (Score:1)
Another good application of this is backups over a network. You don't want backups (which are a pig for bandwidth, but not time-sensitive) having a negative impact on your production network apps, but at the same time segmenting backups onto a separate network means a lot of wasted resources. QoS is a decent solution for this, particularly on small LANs and systems where you can't afford to have a separate network for your backups.
I think most users' primary objection to this is that good applications of the technology will get lost alongside evil applications of it--your Internet access is restricted to 300 bytes/second because the CEO or the resident BOFH has QoS'ed everybody else down to nothing to protect their own Internet FPS games. But really the problem here is bad sysadmins and bosses, not bad technology.
The DoS-via-QoS question is an interesting one, though. It's a kink that will have to be worked out before QoS can really be viable, at least in environments where your Internet service is mission-critical.
e2e by definition (Score:1)
This isn't exactly a useful comment, of course, but I just thought it was interesting - uber-e2e.
Napster centralized? Yes and no. (Score:2)
Microsoft Windows monopoly? (Score:2)
The gub'mint didn't create the Windows "monopoly". It was a natural effect of MASSIVE NETWORK EFFECTS. Windows obviously (still) has competition.
The phone system is a prime candidate for even stronger NETWORK EFFECTS. Imagine telco Foo has 51% market share, telco Bar has 49%, and their systems are islands. Eventually most people will switch to the telco that lets them connect with more people. It's not necessarily a quick, clean process, though.
Re:assumptions about e2e (Score:1)
ah, good old "proof by lack of imagination". think about it. if every device (where device is some set of my toaster, TV, socks, refrigerator, etc.) has an IP address, and is connected to the network, that leads to way more infrastructure, too. more nodes for management. it's a worse-than-linear increase. but given that, the numbers are still pretty generous. bringing us to your assertion: to which i agree entirely. another reason e2e doesn't make sense. i should be able to design my network any way i like, you should be able to design yours any way you like. it should then be up to us to provide for inter-connectivity. it's ironic that, in the internet culture dominated by talk about de-centralization of power and no single "root", ICANN and IANA still define what pretty much everyone can and can't do. to say nothing of the DNS mess. those I* folks have way too much to say about how i run my network.
you are mistaken. unless you take a really, really broad definition of "campus". try dealing with such things for multinational corporations. ones that used to be the telephone company. i'm well aware of the issues surrounding NAT today. but i believe that's because they're trying to work within an e2e network, rather than being able to admit e2e isn't always the way to go.
completely irrelevant aside (Score:2)
> never got that far.
i don't believe the above statement is true. standard oil consistently maintained 90% of market share, i think. and the only reason standard oil at the height of their power never got more than that was entirely for pr reasons. he could've easily crushed them -- among numerous other advantages, he negotiated nasty deals with the railroads that not only gave him huge discounts but actually had the railroads pay him every time they shipped oil for someone else.
rockefeller thought that a small amount of token "competition" would assuage public opinion and make it harder for him to be prosecuted (which was true for a while, but ultimately didn't save him). he intentionally kept prices at a level high enough to allow his few remaining competitors to survive. there are recorded instances where he told his subordinates to back off from a potential kill -- once he completely dominated the market, he felt no need to exterminate the few left.
so it's not like he wasn't as powerful as at&t or microsoft -- he simply never fully exercised his control because he was afraid of a public backlash, and didn't think he needed more than 90% anyway. (the lawsuit was actually spurred in part by the actions of his successor, who was considerably more ruthless in the market and less able to keep up moral appearances.)
unc_
Re:No need for firewalls? (Score:1)
Once IPv6 opens up the address space, I think whatever remaining legitimate reasons exist for using NAT will quickly melt away.
power deregulation (Score:2)
the problem with electricity in the usa is that it has a history of being very, VERY heavily regulated with extremely restrictive price controls, is very sensitive to political/public opinion, but at the same time requires massive outlays in capital expenditures.
it's a recipe for complete disaster. there are two problems with power in the us:
1. power generation -- nobody wants a power plant in their neighborhood. they're too worried about nukes, air pollution, or loss of their whitewater rafting. it is very difficult (in both time and money) to build a new facility. political/public backlashes against building new generators have created artificial scarcity despite massive improvements in technology. hence, america does not have enough power-generating capacity to meet its future needs -- prices will go up, no doubt about that. they need to, in order to compensate a company for all the shit they have to go through to open up a new plant.
2. the power grid -- everyone's plants are connected together, so there is a shared responsibility for maintaining the power grid. quite naturally, no single company wants to pay to repair/increase grid capacity, because there's no way in the system to charge in terms of where electrons are flowing through the network. and gov't isn't too eager to foot the bill either, so the grid is falling apart. the lack of accountability for power grid maintenance is why electricity should never have been deregulated.
the problems in ca are actually more related to the power grid. they're afraid they'll blow the entire system by trying to push too much power through it, which would of course black out most of california and probably a significant part of the west coast.
pick a different example for corporate abuse of power. the power companies are, ironically, relatively powerless because of their situation.
unc_
Re:assumptions about e2e (Score:1)
My comment about the raw numbers involved in ipv4 vs ipv6 was intended to show that the raw numbers are pretty meaningless. ipv6 makes provisions for site-local addresses and ipv4-compatible address space. I'm assuming that most ip-enabled bake-ovens will go in site-local space, maybe with a public-facing control system.
your argument might hold up for DNS, but not routing. when people talk about dumb vs. intelligent networks, do you think they're talking about the wires? it's routers, switches and other network equipment that have that "intelligence" or not. and they clearly have some -- like the routing info.
When routers and switches are moving packets they are dumb; route lookup could be handled by a spring-driven mechanical switch. Populating the routing table is a distinct function from forwarding a packet. Calculating a spanning tree is not frame switching. These functions can be performed on systems other than the router or switch doing the forwarding.
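A sketch of that split, with invented routes: the forwarding step below is a mechanical longest-prefix match, while the table itself is populated by a separate process (BGP, OSPF, or static configuration):

    # Forwarding as a dumb lookup -- the routes here are invented and
    # would normally be installed by a routing protocol, not by hand.
    import ipaddress

    ROUTES = {
        "10.0.0.0/8":  "eth1",
        "10.1.0.0/16": "eth2",
        "0.0.0.0/0":   "eth0",   # default route
    }

    def forward(dst):
        """Longest-prefix match: no per-flow state, no payload snooping."""
        addr = ipaddress.ip_address(dst)
        best = max((ipaddress.ip_network(p) for p in ROUTES
                    if addr in ipaddress.ip_network(p)),
                   key=lambda n: n.prefixlen)
        return ROUTES[str(best)]

    print(forward("10.1.2.3"))   # eth2: the more specific /16 wins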
to which i agree entirely. another reason e2e doesn't make sense. i should be able to design my network any way i like, you should be able to design yours any way you like. it should then be up to us to provide for inter-connectivity.
You can set up your network in any way you see fit. Just follow the standards and no one gets hurt. Routers and switches mangling packets and lying to the other end of the connection will often break those standards. Troubleshooting such problems will make you look like a stud and teach you to curse like a sailor, but there is nothing else to recommend them.
When those standards are broken by an intermediate system deciding to snoop higher layers, then there is a fault at that system. It is not the duty of the designers of the widget control protocol to learn how to interoperate with every router, switch, or firewall out there.
End-to-end requirements are not some holy thing that ip geeks love because some standards body says to. People argue for it because it works. e2e purity is not the point, but gratuitous incompatibility often results when it is broken without extreme care.
--
Re:A QOS that makes sense for end-to-end (Score:2)
Thus, as I said, it doesn't matter how you fiddle with your priorities, as you are only affecting your OWN traffic, not that of others! If you want to do voice over IP, then you'd better make sure you've paid for the minimum bandwidth to provide that service.
Re:A QOS that makes sense for end-to-end (Score:1)
Re:"Society" benefits? (Score:2)
Now that's a remarkably farsighted view for the techno futurists at /dot.
Now imagine this:
What if Joe Q. Example wants to live in a rural agricultural community far out in the Nebraska plains and doesn't want to go Amish or to live in tech central, but to be a part of a mainstream free society that is free from the 'convenience' of global integration and techno modernism? In a capitalist society, few rural communities would have more than a handful of phone lines or internet connections, and those would probably be in a central location like the county seat library.
I know of a remote rural community in AZ near the Sonora border where Maryjane is cheap from 'international entrepreneurs', organic vegetables are a viable industry, open space and mountain views are easy to come by, and land is cheap. It's just about 40 miles from this close knit community to a major city, too. But nobody moves there, because there is no easy access to subsidized phone lines or publicly subsidized water works. Furthermore, the highway is long and roundabout and takes hours to travel because it's not as heavily subsidized as an interstate.
It's a wonderful place and it would die promptly along with its close knit, open, unique, and worldly culture if 'universal service' and the other destructive forces of socialist globalism made their grubby way in there.
Land would soon be too expensive for homesteading immigrants and native children would be forced to move to the city to earn a living. Rich yuppie second homes would start crowding the landscape and the colonial town center would be redone. The two famous writers who live in the area would escape from scrutiny by living alone or finding another community where they could live as a part of a community and not a celebrity.
But that would have to be outside the USA, as the socialist menace of 'universal service' and various other leveling schemes carry us down into fascism.
Correction (Score:1)
Re:bandwidth (Score:1)
Re:For an example (Score:1)
Well then, don't make it so darn enticing to do so
Re:Yawn, QoS (Score:1)
Basically the streaming of any live data requires priority. And as the net is used for more live-data applications, priority will become more of an issue.
Re:Correction (Score:1)
If you don't capitalize "badlands", then it is no longer a proper noun, but just a generic term referring to bad lands. Any region can have bad areas of land, so it is quite possible (though I've never been there to see) that there are badlands in Nevada, but definitely no Badlands in Nevada.
I'm jumping the gun, but why is any of this even? (Score:2)
Why is it an issue?
Why does regulation even have to come up? I mean...
If I take my little network, and hook it to yours, in a completely private deal, who is the government to regulate what we can and cannot send? At what point then, as this grows, does it suddenly become regulatable?
If the public at large want regulation, they can pay for their own network.
Re:More government regulation, great! (Score:2)
Re:No, think about it... (Score:1)
You can easily do fair queuing without priorities.
But wait, you say: I'm more important than that guy -- I don't want fair queuing, I want me-first-damnit! (even if you don't need it)
Well, that's where you need priorities, but priorities without costs are useless because just about everybody thinks they are King of the World(tm).
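A sketch of the fair-queuing idea for the curious (flow names invented): each sender gets its own queue and the scheduler serves them round-robin, so flooding buys you nothing:

    # Round-robin fair queuing: the "hog" cannot starve the "mouse".
    from collections import deque

    def fair_schedule(flows):
        """flows: sender -> deque of packets; yields packets fairly."""
        while any(flows.values()):
            for sender, queue in flows.items():
                if queue:
                    yield sender, queue.popleft()

    flows = {"hog":   deque(f"hog-{i}" for i in range(4)),
             "mouse": deque(["mouse-0"])}
    print(list(fair_schedule(flows)))
    # mouse-0 goes out second, not fifth, despite the hog's backlog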
No, think about it... (Score:1)
It's like the freeway, you don't need the stoplights on the ramps unless there is insufficient capacity. Of course, roads and networks suffer the same fate -- building more capacity only attracts more use -- and in general the end users are (unfortunately) buffered from the true costs (which leads to higher use -- how much gas would you buy at $15/gallon?).
The same goes with many networks -- we have students sending out 100 GB/day from their dorm rooms because they are buffered from the true costs (I paid my dorm bill - I deserve this!)
But in general, with networking it has been "cheap enough" to simply keep piling on the bandwidth -- perhaps someday Moore will give out and we'll have to work smarter not harder.
"Society" benefits? (Score:2)
This comment, in my opinion, is a little bit excessive. At the risk of sounding a bit callous, I have to ask (honest question, now, no flamebait intended):
If Joe Q. Example decides to live out in the middle of the Nevada desert where there are no utilities, how, exactly, does society benefit by paying to give him a phone hookup? Does someone antisocial enough to want to live far, far away from society in general really benefit that society by having a phone connection to it?
I realize that's a bit of an extreme example, but at least it illustrates the point. Comments? Clarifications? "Shut the heck up you idiot"'s? :-)
A vote for the lesser of two evils is still a vote for Evil.
Re:QoS Bad (Score:1)
You seem very focused on the web. You mention the web several times in your post. Keep in mind that there are other reasons to send packets across the internet besides viewing web pages.
As for VoIP through a central server, one significant advantage I can see is anonymity. If connections are made through a central server, it's possible to conceal the IP addresses of the participants from each other. Of course, it could be abused in a number of ways too, but I just wanted to point out that there is, indeed, at least one clear reason why routing VoIP through a central server would be a good thing.
And finally, end-to-end is becoming increasingly less viable due to firewalls and NAT. Case in point: Napster. If two Napster users are behind firewalls or NAT routers that prohibit incoming connections, they can't exchange files with each other. This is another argument for routing traffic through central servers, too - it eliminates this problem. With NAT and firewalls becoming more commonplace, and the internet being used for more and more varied applications, I think now is a great time to start looking at alternatives to the end-to-end approach.
Re:More government regulation, great! (Score:3)
kali,
You are correct that my Britannica link did not fully support my conclusion. Slashdotters are not renowned for their long attention spans, so I keep my posts brief.
Since you've read this far, let me take more of your time and tell a fuller story. It's more interesting. Sadly, I don't have supporting links handy.
As Kali points out, ATT reached for monopoly via "standard laissez faire." If memory serves, they had about a 70-80% market share when, as Kali correctly points out, they bought themselves anti-trust protection and stomped the rest of their market. While we'll never know, many (myself included) doubt that ATT would ever have reached their ultimate 95%+ penetration without government help. Microsoft seems stuck in the 90-95% range, and even Standard Oil never got that far.
Back on topic, Kali is missing a key fact. Consistent with "standard laissez faire," the ATT "monopoly" was already under attack. Ever wonder why security companies get their bare copper provided to them by the telcos on the cheap? Ah, there is a tale. By the 1910s, private alarm companies had sprung up that were laying their own copper. ATT realized that this was a competing infrastructure, so they cut a deal: we'll give you our copper cheap, you stay out of voice. This was formalized in the rate tariffs as the government set concrete on ATT's "monopoly." And there matters stayed, until DSL pioneers started ordering bare copper "security alarm" lines for their data networks. Nasty lawsuits/hearings ensued, where this juicy history turned up.
... and if mine was the "one of the most asinine conclusions [you]'ve ever seen," kali you are clearly new to slashdot.
Re:Obligatory Napster reference (Score:1)
So you don't hold out too much hope for this new-fangled "In-ter-net" thing, do you? Client-server is end-to-end; you have two machines at the end of a dumb pair of copper wires, a dumb piece of fiber, or maybe some dumb RF. Maybe you need routers or switches in between, but those are made just "smart" enough to do their jobs and no smarter.
This reminds me of an article that was posted to /. a while back in praise of dumb fiber rather than the "smart networks" that the QoS folks/telcos continue to push. Here's the link. [upenn.edu]
I hate to do this, but MOD THIS UP! (Score:1)
Jay (=
PDF! (Score:1)
Am I the only one out there who hates reading things in pdf format? Can't they post things in simple html (or even complex html)? I would rather read things in some proprietary format such as M$ word than in crappy pdf.
</rant>
Re:More government regulation, great! (Score:1)
Don't forget what AT&T offered in exchange for its monopoly: universal service. That was an exceedingly good deal, all told. It only had to be undone when AT&T got too big for its britches and began to actively suppress competitors while not improving its services; classic behavior of a monopoly.
Re:Correction (Score:1)
And if you said we went to "the great mall" in Nevada, you would be incorrect -- as the great mall is in Minnesota. While California does have a "the great mall" as well, you could even say "the great mall of California" because it is still a place. When you use 'the' in front of the name of something, you are giving it a specific instance. Your example of "the closest example of a mall" would still serve the case, as the closest example of badlands would probably be in Arizona or Utah (depending upon your location in Nevada) -- Nevada doesn't even have any nonspecific badlands. All Nevada really has is deserts. (Using the badlands definition of "barren land characterized by roughly eroded ridges, peaks, and mesas.")
I don't want to work either.
Re:More government regulation, great! (Score:1)
Stuart Eichert
Re:NAT IPv6 and Security... (Score:2)
NAT will not save you if you are running an insecure server, but for insecure clients, it will save you from hackers trying to create a connection to the client. At least that is true if the NAT is properly set up.
Re:A QOS that makes sense for end-to-end (Score:1)
Honestly, what's going to keep me from fiddling my connections (at either end) to do ftp-over-voice-over-ip in order to keep my priority high? After all, compressed, encrypted voice looks a lot like compressed, encrypted programs. QoS will only work if all players are trusted. As far as I can see, QoS will only let us download all that pr0n faster (so bring it on! :).
Re:More government regulation, great! (Score:1)
jitter? pah! Remember Cheshire's Law (Score:1)
QoS is bogus because of Cheshire's Law [stanford.edu] - for every network service there is a corresponding disservice.
Nothing in networking comes for free, which is a fact many people seem to forget. Any time a network technology offers "guaranteed reliability" and other similar properties, you should ask what it is going to cost you, because it is going to cost you. It may cost you in terms of money, in terms of lower throughput (bandwidth), or in terms of higher delay, but one way or another it is going to cost you something. Nothing comes for free.
Re:More government regulation, great! (Score:1)
NAT IPv6 and Security... (Score:1)
Re:More government regulation, great! (Score:2)
If a bunch of bureacratic slimy corporations do not provide what I am looking for I can buy something from a small business.
How right you are. Indeed, in the PC software business (with no gov't regulation), when you've been hit by a macro virus and a dangerous executable email attachment, and you've had it with that slimy corporation's lack of security features, and you're frustrated with their continued denial that there is anything wrong in the design of their product, the free market forces are ready to serve. Just go out and buy one of those many competing (viable) office and email/contact software packages!
Re:A QOS that makes sense for end-to-end (Score:2)
The idea is that you pay your ISP for a right to send at a certain nominal bandwidth. If you send faster, your packets will be marked with lower priority. At the network's core routers the lower priority packets will be dropped if congestion occurs.
This idea is called SIMA. You can learn more about it at http://www-nrc.nokia.com/sima/ [nokia.com].
(I have no relation to them. I just know these guys and really think they've come up with something useful.)
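The marking idea can be sketched as a token bucket sized to the nominal rate. This is an illustration of the concept only, not Nokia's actual SIMA code, and the rate and burst figures are invented:

    # Token-bucket marker: traffic within the paid-for nominal rate keeps
    # high priority; excess is marked to be dropped first under congestion.
    import time

    class NominalRateMarker:
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0      # refill rate, bytes/second
            self.burst = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def mark(self, pkt_bytes):
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= pkt_bytes:
                self.tokens -= pkt_bytes
                return "HIGH"   # inside the nominal bandwidth
            return "LOW"        # dropped first if core routers congest

    m = NominalRateMarker(rate_bps=64_000, burst_bytes=1500)
    print([m.mark(1500) for _ in range(3)])   # burst spent -> later packets LOW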
For an example (Score:2)
Re:NAT IPv6 and Security... (Score:1)
The end-to-end idea sounds decent enough... (Score:1)
Ugh. To be mooned by seven animals on an innocent webpage. I certainly won't be saying nice things to Zelerate anytime soon.
More government regulation, great! (Score:2)
Stuart Eichert
Translation (Score:1)
Obligatory Napster reference (Score:1)
The hard part of end-to-end is changing any design component that is found to be inadequate, like Gnutella's scaling problems. Rather than changing relatively few servers, you would need to push updates out to a huge number of smaller components, be they phones or PCs. Regardless, the long-term benefits of a decentralized architecture -- which is what end-to-end used to be called, I think -- probably outweigh the negatives.
Re:QoS Bad (Score:1)
Re:Correction (Score:1)
You can't say we went to "the badlands" unless you are talking about the ones in South Dakota -- otherwise you just went to badlands.
Re:Questions Questions (Score:2)
assumptions about e2e (Score:2)
i know i'm risking flaming here, as e2e is something of a holy cow on the internet. but bear with me. there are two ways of looking at it: internet-centric, or network design in general. let's look at it from an internet-centric point of view first.
the basis of e2e is that all the "intelligence" is pushed out to the edges. well, upon examination, this doesn't hold up in the practice of the internet. please raise your hand if you've got the entire internet routing table on your host. or the full hostname->ip mapping for the internet. no? that's right, that info (intelligence) comes from the network -- switches, routers, DNS servers, and the like. a common counter is "DNS isn't an application", but to the network, it is. it's just transferring bits around. so's routing. but the intelligence that makes it work lives in the network, not on the edge (my host).
now look at it from a general network design, non-internet-centric point of view. from the facilities point of view, the only application the "network" knows about is setting up and tearing down data paths. whether it's a TCP stream, a UDP datagram, an ATM connection, or a phone call -- the "network" delivers data in defined ways. while QoS can be described as a (potentially problematic) upgrade to this service, NAT is not inherently anything more than a breakdown of the DNS system (although, as currently implemented, it introduces more problems). it's compensating for a failure of IP: the failure to allow real hierarchical address management. IPv6 doesn't solve this, it just pushes the problem farther out into the future. if, as many people have suggested, my toaster, car, telephone, TV, and socks will all have IP addresses one day, even the new addresses provided by IPv6 won't last very long.
so what to do? on the internet today, DNS first resolves a name into a number. it's the numbers that get things done; it's the numbers that're important. imagine a network where the names were what was important; where i can define, for example, myco/us/ny/ny/10/107 as the 107th terminal on the 10th floor of the NY, NY office of my company. that's a crappy naming system, but the point is that in a world where the names are the key, i can have much better control over the structure of my network and addresses. it also simplifies routing and such, as the area of responsibility becomes much clearer. it's a direct violation of the e2e model, but so what.
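for illustration, a sketch of what name-keyed forwarding might look like (all table entries and next hops invented): the longest matching name prefix picks the next hop, so the hierarchy itself does the routing work.

    # hypothetical name-prefix routing table -- entries are invented
    TABLE = {
        "myco":             "uplink-to-myco",
        "myco/us/ny":       "ny-gateway",
        "myco/us/ny/ny/10": "floor-10-switch",
    }

    def route_by_name(name):
        """longest matching name prefix decides the next hop."""
        parts = name.split("/")
        for i in range(len(parts), 0, -1):
            hop = TABLE.get("/".join(parts[:i]))
            if hop:
                return hop
        raise LookupError(name)

    print(route_by_name("myco/us/ny/ny/10/107"))   # floor-10-switch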
oh, and before you say "it can't be done, you need numbers" or whatever, it's already been done. several times. read up on Datakit, a network architecture developed by Bell Labs way back when. it had other problems, like static routing, but it got a lot of things right that the Internet's still fumbling with.
again from the network design point of view, answer this question: which is easier to upgrade, 1 server or 100,000 clients? imagine if every time AT&T wanted to change your billing rate or add the capacity for three-way calling they had to upgrade your telephone. and everyone else's telephone. by concentrating the intelligence (and, therefore, the complexity) the telephone network has become the most successful network in the world, with more endpoints than the Internet, and way more users. concentrating the complexity means less overall work, less chance for something to fail, and lower barriers to innovation.
that being said, the business model of the Bell system has introduced other barriers to third-party innovation and service, but those are business issues, not technical ones. e2e results in increased overall complexity and additional problems in the name of distributing it away from any central point. sounds like a bad trade to me.
Re:A QOS that makes sense for end-to-end (Score:1)
Plus, on a larger scale, it would be nice if routers honored the QoS (or just the ToS) bits for that matter, so interactive apps could win out over batch-type apps.
Re:No, think about it... (Score:1)
let's say my router can handle gigabit traffic, but i'm only using about 200 megabits/s. i don't want my 1 megabit/s of high-priority stuff to have to get plunked out of the router *after* the other 199 megabits that came before it are routed. i want higher priority. that's where it makes the real difference, and can make a serious difference in media-intensive applications.
jon
Scalability (Score:4)
The problem is that folks are trying to put services (like voice) that need realtime delivery onto a network that wasn't designed for it. In a straight IP based network, each router only needs resources for the packets that it is currently handling. As soon as a packet gets sent, the router's interest in it ends. A packet can (theoretically!) take any route at all through the network, and it's the endpoint's responsibility to put everything back together.
Anything else requires additional resources for each connection going through the router. For a backbone router, this is a *lot* of connections. It also means that each connection is "nailed" to a single route through the network. Lose a router and you not only lose the packets that it is storing at the time, but all the connections that it is handling. There are ways of handling this, of course, but the solutions are expensive, in terms of both hardware and bandwidth.
In my somewhat cynical opinion, what the providers want to do is take the simple "flat rate" model that the Internet is built on and turn it into what Scott Adams calls a "confusopoly", where the customer is never sure what services she is getting or what they're supposed to cost.
Combine this with the Government's desire (all governments) to monitor and control all communications, and you have the recipe for a real mess.
--
Re:QoS Bad (Score:1)
QOS can also be a good solution for controlling internet access from a corporate network. On a LAN with 2000 users, even 10% of them listening to streaming RealAudio radio stations can put a pretty serious drain on most internet connections. The traditional way of dealing with this is to block RealAudio, Napster, and anything else fun on the firewall. I vastly prefer the idea of using QOS (when possible) to give priority to business-essential traffic, while allowing any left-over bandwidth to be used for "non-essential" internet usage. It's a win-win: employees get the "perk" of being able to continue to use their instant messengers, Napster, RealAudio, etc., but the actual business uses that the internet connection is *for* are not negatively impacted.
Re:I'm jumping the gun, but why is any of this eve (Score:4)
If you take your little network and hook it to another little network, no one cares. Now, let's suppose that you are a big network; in fact, you are the monopoly cable provider in your town. Now, let's suppose you like Fred Foo for mayor, because he helped you arrange your monopoly, and you hate Bart Barr, because he's trying to get a competing cable franchise established. So you decide to give the Foo campaign high QoS and 300 bps to the Barr campaign. Still no reason to regulate?
Or, to take a more realistic example: you have a cable monopoly and you own a movie studio. You provide high QoS jitter-free streaming interactive movies to your cable modem customers -- but only movies owned by your studio. Competitors can only use your generic, bursty service, with lots of packet retransmissions and brief outages. Customers can use DSL instead of a cable modem, but the local phone company, which controls all DSL traffic, has made a deal with a different movie studio, so if you want to watch someone else's movies you're still hosed. You can try wireless IP, but there's not enough available bandwidth and too much interference.
Long ago, the feds made a very wise decision: they forced the major studios to sell their theaters. In the old days if you were in a small town you might only be able to get movies produced by the studio that owned your local theater. Content and distribution need to be kept separate, by law if need be.
QoS and direct connections (Score:3)
QoS is actually used in a large portion of the backbone, but not at the IP layer.
For example, Sprint uses the same network for their digital voice (PCS, long distance). This is a big SONET backbone tied to OCx ATM networks. From there they branch to voice or data.
For IP data networks, it flies over the OCx networks just the same as voice, but voice has QoS applied to its virtual connections via ATM AAL2. IP data traffic is usually AAL5 with no QoS.
Also, many of the backbone IP providers (Sprint, UUNet) use QoS/traffic shaping at the entry point for small ISPs to ensure that traffic from big fish like Sprint or UUNet or AOL gets better response.
You may remember an article about big data providers (UUNet and Sprint specifically) giving crappy data service to ISPs and affecting their ability to compete or provide reliable services.
At any rate, the point of this is that QoS is currently used, but internally by the backbone carriers themselves. It is definitely nice to have, and allows them to implement all sorts of latency-intolerant services like voice and video over their networks which cannot be implemented without QoS.
It will take a lot of effort to get QoS at the IP layer, as this will entail paying ISPs for a QoS connection, probably ATM, and running IP over that connection, or fundamentally altering the IP protocol to include QoS capabilities similar to those provided by ATM. The latter will not happen.
Re:More government regulation, great! (Score:3)
Secondly, corporations abuse power. They help their friends and burn their enemies, with the consumer left as the meat in the sandwich. Bureaucracy is bureaucracy, private or public.
Absolutely. I in no way disagree with you. The fundamental difference is that by law I do not have to buy a particular company's products, but I do have to abide by whatever laws the government sets. If a bunch of bureaucratic slimy corporations do not provide what I am looking for, I can buy something from a small business. The most disgusting thing corporations can do is to make themselves effectively a part of the government by lobbying to influence legislation that interferes with the free market. Remember that corporations and their CEOs are not necessarily interested in the free market.
Stuart Eichert
Theoretically, yes... (Score:2)
In theory, yes. In practice, though, ISPs will still try to get a few more dollars per month out of you just for changing a few entries in their database. I think the logic goes something like:
--
Re:A QOS that makes sense for end-to-end (Score:2)
Re:A QOS that makes sense for end-to-end (Score:1)
You would need to in some way change things so that no user can gain an advantage over the others. I don't know how you whould do that...
Re:Translation (Score:2)
re-examine your assumptions (Score:2)
Why do you need a map of the Net? One of the basic ideas of the Net is that it works using only local information.
Redefinition (Score:1)
Re:No need for firewalls? (Score:2)
Likewise, if you can do encryption and authentication for free, then turning it off costs more than just leaving it on.
You have a good point about administration, although if you rely only on perimeter security then you're screwed if the perimeter is ever breached.
Why is NAT bad? (Score:2)
Could someone explain this a little more? I use ssh through NAT gateways all the time. E2e encryption sure seems to work fine. I suspect I'm missing the point...?
So what?
---
Re:"Society" benefits? (Score:2)
also, keep in mind that the real issue is "hard to service areas" not "people i think are useless". it's far more difficult to run lines into the appalachian mountains than down the block in Manhattan. yet interesting people or organizations can be found in both places. the diversity ubiquitous telephone access provides society is far and away worth the small cost.
Re:QoS and direct connections (Score:2)
While they do work, they are not ubiquitous on the internet, and that is what I meant by QoS for IP. A complete and total QoS solution without need for special TCP/IP stacks and applications.
Re:More government regulation, great! (Score:2)
Or did the gov't just say they would not prosecute AT&T for anti-trust - in which case it would not be profitable for anyone else to try to compete (which is a different problem than to make any attempt to do so a crime).
Re:completely irrelevant aside (Score:2)
Thanks for your good post. I basically agree with you. In several crucial markets, particularly US domestic distribution, Standard Oil (SO) did achieve effective total monopoly. A couple of caveats: the Spindletop discovery ended SO's stranglehold on crude production long before the courts broke up SO, and by the turn of the century the relevant market for petroleum was worldwide, not country by country. And there, SO never got to monopoly (despite vigorous effort). If you're interested, email me about the history of Shell and BP, or read chapters 5 & 6 of "The Prize" by Yergin.
Thanks for the good post.
Re:No, think about it... (Score:2)
i don't want my 1 megabit/s of high-priority stuff to have to get plunked out of the router *after* the other 199 megabits that came before it are routed.
If you truly have adequate bandwidth (including your router's capacity), it'll all go out at the same time. QOS only comes into play when you have 200 Mbps but you're trying to send 201. That's when the 1 might have to wait for the other 200.
QOS looks good in that respect, especially since you control the router. However, instead of making a very simple routing decision, the router now has to look at the data in the packets and decide when and how to route it. Depending on how QOS is implemented, the router might end up costing more than an extra 1 Mbps of capacity.
Really, QOS comes down to a method to allow your ISP to get away with more overcommitment. That could be good or bad, depending on your ISP.
Re:More government regulation, great! (Score:2)
Getting really sick (Score:4)
I look at the situation now and want to throw up: vendors paying for stuff that is already being paid for by someone else, names and numbers being bought in blocks and hoarded, etc., protocols that don't want to work with each other, a government that couldn't even fathom the complexity of the system but wants a chokehold on it. We're so screwed.
I'm moving to Sealand
QoS Bad (Score:2)
As an existing web user I am wary of QoS attacks, especially abuse of QoS (probably the higher priority levels) to create a new, nasty variety of DoS attacks.
Personally, the whole end-to-end notion makes much sense to me, as it is a viable paradigm (oops) for most internet applications and problems.
I also don't know how voice over IP through a central server will provide many of the interesting possibilities (e.g. intranet telephones) that a point-to-point approach will.
Re:NAT IPv6 and Security... (Score:3)
Re:More government regulation, great! (Score:3)
Re:More government regulation, great! (Score:4)
Scalability vs Marketing Agendas (Score:2)
I see that there is a scalability issue versus the political agendas. There are obvious advantages to not being democratic in how you deal with data. Spam, for example, or porn [depending on taste].
There are also disadvantages to not being democratic in dealing with data. Insert speculated advertising spin:
The straw man and obvious target aside, this is also a problem. So do we want to gain a short-term advantage in dealing with scalability, and wind up with a solution we would not desire?
The point of QoS (Score:3)
Separating internet traffic into high-jitter and low-jitter classes could easily reduce VoIP jitter by a factor of 10.
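A sketch of why the separation helps, using the smoothed interarrival-jitter estimator from RFC 3550. The transit times are invented to contrast a voice flow queued behind bulk transfers with one in its own low-jitter class:

    # RFC 3550-style smoothed jitter: J += (|D| - J) / 16 per packet pair.
    def rfc3550_jitter(transit_times):
        """transit_times: per-packet one-way transit estimates, in seconds."""
        j = 0.0
        for prev, cur in zip(transit_times, transit_times[1:]):
            j += (abs(cur - prev) - j) / 16.0
        return j

    mixed    = [0.020, 0.085, 0.021, 0.090, 0.022]  # voice behind bulk data
    separate = [0.020, 0.021, 0.020, 0.022, 0.021]  # own low-jitter class
    print(rfc3550_jitter(mixed), rfc3550_jitter(separate))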
Re:More government regulation, great! (Score:3)
Can't you realize that government regulation got rid of the AT&T monopoly?
This is only half true. During telephony's first half-century (roughly 1875-1920), vigorous competition was the norm. From Britannica [britannica.com]:
That's right: ATT was facing growing competition, so they had the government declare them a "natural monopoly." In 1984, the government was just trying to undo the mistake it had made 60 years earlier.
IPSec and its limitations (Score:2)
I point out that without end-to-end authentication and encryption, it is like having a bikini and only wearing the top. They have to understand that the identification/authentication provided is for the agency router and nothing more (this is still necessary but not sufficient for secure communications). Of course, you can run IPSec in authentication mode and then run HTTPS/SSL to support the application-level authentication and encryption.
Re:QoS and direct connections (Score:2)
So what are RSVP and DiffServ? It already happened; twice in fact.
That's a hell of a lot of bandwidth (Score:2)
The most bandwidth I can use is limited by ... my input devices (including video camera), and the number of individuals who want to see my content all at one time (emphasis added).
Remember, more people will want to see what you have than you think. There are hundreds of millions of people on the Internet; unless multicast gets REAL good REAL fast, you're going to have a problem serving even 0.1% of the world's population (6 million simultaneous users).
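The back-of-the-envelope version, assuming a modest per-viewer stream (28.8 kbps, a typical modem stream of the day -- the rate is an assumption, the user count is from the sentence above):

    # 0.1% of ~6 billion people, each pulling one unicast stream
    viewers    = 6_000_000
    stream_bps = 28_800
    total_gbps = viewers * stream_bps / 1e9
    print(f"{total_gbps:.0f} Gbps")   # ~173 Gbps of upstream capacity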
Remember, there *is* such a thing as too much porn.
Or too many users viewing the same pornogram[?] [everything2.com].
E2E is bad for VoIP. (Score:2)
O.k., putting all of the intelligence into the endpoint is a _bad_ thing. By doing that, in order to offer a service, you have to require that everyone use a _specific_ platform, and a specific release. All of a sudden, the idea of interoperability goes out the window. We will be back to the situation where the only phones you will be able to use are ones approved by the phone company, complete with being wired into the wall. :)
I think there is some confusion about the intent of the protocols discussed. SIP/H.323 are signalling protocols; they leave it up to the endpoint to make decisions. MGCP is a _control_ protocol (says so in the name! :) ). MGCP is great at controlling gateways. It is made to do things like control VoIP gateways, allowing third-party ISUP systems to manage the circuits and perform the H.323/SIP for them.
Moving intelligence away from the endpoint is _good_ when you are trying to get large port densities. Currently, port densities for VoIP gateways tend to max out at about ~1400 ports (e.g. Cisco 5800). 1400 ports only gets you about four calls/second if callers stay on the line for the average 5 minutes. Four calls is nothing, piddly, minuscule. Now imagine having to manage several hundred gateways, each one with its own configuration, just to get the call rate up to an acceptable level. Now you see the reason MGCP exists. Gateways are supposed to be stupid. They route speech. They shouldn't have to think about it; it's hard enough on the DSPs just to convert it to packets. :)
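The arithmetic behind the port-density point, using the figures from this post (the carrier-scale target at the end is a made-up example):

    # One gateway: 1400 ports, average call holds a port for 5 minutes.
    ports         = 1400
    avg_hold_secs = 5 * 60
    calls_per_sec = ports / avg_hold_secs
    print(round(calls_per_sec, 1))            # ~4.7 calls/second/gateway

    target_cps = 500                          # hypothetical carrier-scale load
    print(round(target_cps / calls_per_sec))  # ~107 gateways to manage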
If you pull the intelligence required for basic services away from the endpoint (perhaps through proxies) you can then put the intelligence in a central location, allowing a lower cost solution (since the endpoints can be _very_ stupid), as well as allowing people to purchase more expensive endpoints for additional features. The basic features are still provided by the central server. Features such as call waiting, call forward on busy (voicemail), billing, number portability, voice prompts, announcements, etc. would all be implemented in a server/proxy. If you wanted fancier routing capabilities, you could buy something that gave you additional control, but you aren't required to buy it to get basic services.
Of course going in the other direction is also bad. You want the endpoint to be allowed to make decisions about what is going to happen. So, for communication with the final destination, MGCP is probably a poor choice, and that's why SIP/H.323 exist (my preference is H.323...SIP sucks. :P ).
Saying E2E is good and saying that protocols that exclude E2E (such as MGCP) are bad prevents a whole range of technological solutions from being used. A control protocol is just that, a control protocol. If you don't need (or want) intelligence at an endpoint, why require it?
Jason Pollock
Yawn, QoS (Score:4)
Because nobody is willing to pay for it. Customers of ISP service, given the choice between more bandwidth and priority, always buy more bandwidth with the same dollars. Bandwidth is cheaper and cheaper to provide; priority is expensive. These trends are, if anything, accelerating as DWDM and the like make it ever cheaper to cram more gigabits of traffic onto the same fiber.
Of course bad guys like cable carriers may use QoS to implement CoS (Crappiness of Service) for their less favored customers, but as options increase, customers of such will switch away.
It's like soccer in the US: QoS is the wave of the future - and always will be!
Re:More government regulation, great! (Score:3)
Deregulation of electrical power supply in California is perhaps leading to higher electrical bills in the long run.
Secondly, corporations abuse power. They help their friends and burn their enemies, with the consumer left as the meat in the sandwich. Bureaucracy is bureaucracy, private or public.
Re:Yawn, QoS (Score:2)
Not to mention that bandwidth is a lot easier to advertise...