
The Fight For End-To-End: Part One

Posted by michael
from the in-one-NIC-and-out-the-other dept.
Stanford University held a workshop last Friday - The Policy Implications of End-to-End - covering some of the emerging policy questions that threaten the end-to-end paradigm that has served today's Internet so well. It was attended by representatives from the FCC, along with technologists, economists, lawyers and others. Here are my notes from the workshop. I'm going to skip describing each individual's background and resume, instead substituting a link to a biography page whenever I can. (Part one of two.)

The summary provided by the conference organizers has a brief description of end-to-end:

"The "end-to-end argument" was proposed by network architects Jerome Saltzer, David Reed and David Clark in 1981 as a principle for allocating intelligence within a large scale computer network. It has since become a central principle of the Internet's design. End-to-end [e2e] counsels that "intelligence" in a network should be placed at its ends -- in applications -- while the network itself should remain as simple as is feasible, given the broad range of applications that the network might support."

Another way to view end-to-end might be as a sort of network non-interference policy: all bits are created equal. The problem is that there are substantial economic incentives to treat bits differently, and these incentives are changing the architecture of the Internet in ways which may be detrimental to public values.

The workshop covered a number of areas:

  • Voice over IP
  • Network Security
  • Quality of Service
  • Content Caching
  • Broadband
  • Wireless

Jerome Saltzer started off with a technical overview of the end-to-end argument. In summary: digital technology builds systems of stunning complexity, and the way to manage this complexity is to modularize. For networking, this resulted in the layer model that many Slashdot readers are familiar with. He suggested that designers should be wary of putting specific functions in lower layers, since all layers above must deal with that design decision. For a longer explanation, one can always read the original paper. If you've never heard of end-to-end before, I do suggest reading this paper before continuing. It's short.


First, Scott Bradner described two competing architectures for voice-over-IP protocols: one which employs central servers to direct and manage calls (the Media Gateway Control model, or Megaco), and one which puts most of the intelligence in the end-points, with the phones/computers originating the calls (the Session Initiation Protocol, or SIP). One important difference: SIP phones can use a central server to direct calls, but Megaco phones have no capability to act independently. Building a great deal of intelligence into the central servers is less end-to-end-compliant than building it into phones at the edges of the network.
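
To make the contrast concrete, here is a rough sketch (my own illustration, not something presented at the workshop; every address and identifier is made up) of the kind of request a SIP endpoint composes on its own to initiate a call:

```python
# A minimal sketch of the request a SIP phone builds by itself to
# initiate a call. This is not a working SIP stack; all addresses
# and identifiers below are hypothetical.

def make_invite(caller, callee, call_id):
    """Build a bare-bones SIP INVITE request."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        f"From: <sip:{caller}>",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

# The phone can send this directly to the callee's host, or hand it
# to an optional proxy -- the intelligence stays at the edge.
print(make_invite("alice@10.0.0.5", "bob@example.org", "a84b4c76"))
```

The point is architectural: the phone itself generates the signaling, and a server is an optional convenience rather than a requirement, which is what makes SIP the more end-to-end-friendly of the two models.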

One member of the audience pointed out that Federal law requires companies to build wiretapping capabilities into phone switches and wireless network equipment, and wondered how that would be implemented if the phones initiated the connections themselves (SIP). Traditional wiretapping is predicated upon the idea that there is a central server which all communications pass through. The panel candidly replied that when no central server is used and encryption is employed, wiretapping is difficult. One audience member pointed out that wiretapping at centralized switches is not the most effective way to do it, anyway -- since switches can be routed around and communications can be encrypted, the only truly effective way to wiretap would be to build tapping capabilities all the way at the edge of the network -- the phone itself. While some of the audience laughed, I think most of the participants also realized the dark undertones of this suggestion.

Next the discussion turned to innovation. In one model, the central servers would be controlled by companies with a vested interest in managing them conservatively, suppressing competition, etc. In the other, individuals would be able to create/control their own phones on the perimeter of the network, and the only barrier to innovation would be finding someone else to adopt your improvement as well so that the two of you could communicate. In the first model, innovations which benefited the company would be the only ones permitted. In the second one, any innovation which benefited the end-user would be possible.

Finally the discussion moved to a rarely thought about side effect of voice over IP. Universal service -- phone service to (nearly) every resident of the United States -- is funded through access charges on your phone bill. In effect, people in cheap-to-service areas are subsidizing those in expensive-to-service areas, ranging from the badlands of Nevada to wilderness areas of Alaska. From a societal point of view, ubiquitous access to telephones has been a great boon, but providing it requires a societal commitment -- otherwise people living outside of major population centers might never have phone service. Suppose now that traditional telephony is replaced by voice over IP, and no central servers are involved -- there would be no easy way to collect the access charges which subsidize outlying areas. While lowering such taxes may have widespread appeal, completely abandoning the commitment to universal service would be a great loss to society.


The next focus was network security. Firewalls are probably the most obvious breaks in the end-to-end paradigm -- after all, these devices' sole purpose is to stand in the way of network connections, and decide which are permitted and which are not. Participants brought up (but thankfully, quickly moved past) the true-but-useless point that if all operating systems were secured properly, there would be no need for firewalls.

Hans Kruse pointed out that if security must be implemented at the end anyway -- as it must be if any incoming traffic is permitted through the firewall -- then there's no reason to do it at the center as well. David Clark put forth the useful distinction between mandatory and discretionary access controls -- mandatory controls being ones put into place by someone else, discretionary ones being put into place by you. Discretionary controls do not violate end-to-end, but mandatory ones generally do. Michael Kleeman noted that firewalls are put into place as often out of a desire to control the actions of users inside the firewall as out of a desire to control access from outside.
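
To make the mandatory/discretionary distinction concrete, here is a toy sketch (my own illustration, not anything shown at the workshop; the rules and ports are hypothetical) of a discretionary control: an allowlist the endpoint's own owner configures on the machine itself.

```python
# A toy discretionary access control: an allowlist the machine's own
# owner installs at the endpoint. (A mandatory control would be the
# same check imposed upstream by someone else.) Rules are hypothetical.

ALLOW = {
    ("tcp", 22),   # ssh, permitted by the owner
    ("tcp", 80),   # http, permitted by the owner
}

def permit(proto, dst_port):
    """Decide at the endpoint whether an incoming connection is allowed."""
    return (proto, dst_port) in ALLOW

print(permit("tcp", 22))    # the owner allowed ssh
print(permit("udp", 6000))  # everything else is refused
```

Because the decision is made at the end host by its owner, this kind of filtering is compatible with end-to-end in a way that an upstream middlebox enforcing someone else's policy is not.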

Doug Van Houweling spoke regarding Network Address Translation (NAT). NAT allows two networks to be joined together, and is typically used to join a network of machines with non-routable IP addresses to the global Internet. NAT is an outgrowth of the limited availability of IPv4 addresses, but is also employed in some cases as a poor man's security measure. Van Houweling described NAT as an affront to end-to-end: any application which requires transparency of addresses breaks, and end-to-end encryption becomes impossible, since applications sometimes transmit data in the TCP/IP headers which NAT alters. The group noted that NAT can be eliminated simply by putting more addresses into circulation. Later in the workshop, Andrew McLaughlin talked about the address allocation process for IPv6 and said that it is shaping up to be much better than that for IPv4.
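
As a rough illustration of why (my own sketch, with hypothetical addresses; a real NAT also rewrites checksums and ages out mappings), the essential NAT bookkeeping looks like this:

```python
# A sketch of the core NAT bookkeeping: outbound packets from
# non-routable addresses are rewritten to a single public address,
# and the mapping is remembered so replies can be translated back.
# All addresses here are hypothetical.

PUBLIC_ADDR = "203.0.113.1"

class Nat:
    def __init__(self):
        self.table = {}       # (inside_addr, inside_port) -> public_port
        self.reverse = {}     # public_port -> (inside_addr, inside_port)
        self.next_port = 40000

    def outbound(self, src_addr, src_port):
        """Rewrite an outgoing packet's source to the public address."""
        key = (src_addr, src_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return PUBLIC_ADDR, self.table[key]

    def inbound(self, dst_port):
        """Translate a reply back to the inside host, if a mapping exists."""
        return self.reverse.get(dst_port)

nat = Nat()
print(nat.outbound("192.168.1.10", 5060))  # -> ('203.0.113.1', 40000)
print(nat.inbound(40000))                  # -> ('192.168.1.10', 5060)
```

Any protocol that carries its endpoint addresses inside the data stream, or that cryptographically protects its headers, fights with exactly this rewriting step.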


The workshop moved on next to Quality of Service. QoS in this case covers a wide range of proposals (and a few working implementations) for selectively speeding up or slowing down network traffic -- a sort of "nice" for network data flows. The "benign" use of QoS is to ensure that strongly time-sensitive traffic like videoconferencing or telephony gets priority over the download of NT Service Pack 16. There are less-benign uses: Cisco's 1999 white paper, which encouraged cable Internet operators to use Cisco's QoS features to speed up access to proprietary (read: profitable) content while slowing down content from competitors, was the red flag in the QoS realm, raising concerns about the role of ISPs in traffic delivery and about abuses by telecom carriers which are also content providers.

This segment started with an overview of QoS. There are several ways to implement QoS on a network. The simplest is to build a network with a capacity great enough to never be maxed out; if the network has sufficient bandwidth, there's no need to worry about QoS in the first place. There are costs, though, to maintaining sufficient excess capacity on the network. This is called "adequate provisioning" if it is your preferred method of managing traffic, or "over-provisioning" if you prefer one of the other QoS approaches. The other ways under consideration are an integrated services architecture (IntServ) and a differentiated services architecture (DiffServ). The former would monitor and track each individual data flow -- the call you place to your mother in Singapore could be treated differently from the call you place to your grandmother in Kraków. The latter would only allow differentiation between classes of service -- all videoconferencing would be treated similarly, for example. Of the three, adequate provisioning is fully end-to-end, DiffServ is less so, and IntServ is highly non-compliant.
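
The class-based (DiffServ-style) idea can be sketched as a strict-priority queue -- my own toy illustration with hypothetical class names, ignoring the policing and per-hop behaviors a real router would need:

```python
# A toy differentiated-services scheduler: each packet carries a class
# mark, and the queue drains higher-priority classes first. Contrast
# with IntServ, which would track each individual flow, and with
# adequate provisioning, which needs no marks at all.

import heapq

PRIORITY = {"voice": 0, "video": 1, "best-effort": 2}  # lower drains first

def drain(packets):
    """Return packets in the order a class-based queue would send them."""
    heap = [(PRIORITY[cls], i, payload)
            for i, (cls, payload) in enumerate(packets)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

queued = [("best-effort", "service-pack chunk"),
          ("voice", "phone frame"),
          ("video", "conference frame")]
print(drain(queued))  # phone frame, conference frame, service-pack chunk
```

Note that the scheduler never inspects who sent the traffic -- which is precisely why the same mechanism can be pointed at benign goals (protect telephony) or abusive ones (slow a competitor's content).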

Jerome Saltzer (from the audience) made the point that no QoS technique provides real guarantees of service, and any technique except having plenty of excess bandwidth available violates the principles of end-to-end. He emphasized that people should be aware of the trade-offs.

Jamie Love not only mentioned the Cisco white paper but also pointed out that this situation lends itself to the sort of behavior that has landed Microsoft in hot water -- using one's control of a particular system to speed up one's own content and impede competitors' content. A member of the audience countered that QoS would allow companies to create different levels of service -- pay more for fast access, less for slow access -- and that this was a good thing.

There were two distinct classes of problems identified. The first is similar to the distinction among methods for carrying voice over IP: the companies that control the QoS-enabled servers get to control who gets to innovate in QoS-related areas. The second, related problem is that of carriers using QoS features to promote their own content. The second problem has traditionally been solved by requiring a separation of carriage and content -- keeping the owner of the lines and the provider of content over those lines separate. The current FCC and FTC are not enforcing that traditional check against monopolization of content in telecommunications; thus it's likely that unless governmental policies change, AOL/Time Warner will be in a position to promote its own content through control of the cable Internet services it owns.

Doug Van Houweling then spoke and noted that the Internet2 project is taking a very strong stance promoting QoS, because that stance is seen as necessary to promote investment in Internet2 architecture.

An audience member spoke up and suggested that the best regulatory course would be regulation with a light touch -- imposing the minimum controls needed to enable genuinely necessary QoS while disallowing abusive uses. At this point Deborah Lathen asked the $64,000 question: how would the FCC make this fine regulatory distinction? No one had a good answer.


In Part two tomorrow: transparent caching, broadband and wireless access, and capitalism.

  • Good point! It's not the ISP networks that are responsible for servers getting /.ed, for example - it's the lack of power on the server side. It's been quite a while since ISP networks have been the culprit in any major network failure - even the famous Victoria's Secret debacle was primarily a server issue.
  • Interesting to hear a name attached to this.

    Recently, security lists (like Bugtraq [securityfocus.com]) have been getting more and more traffic about "attacks" that turn out to be one proprietary outfit or another pumping out packets to "map" the Net. (Screw the admin who is trying to figure out what the hell they're up to; some of them wouldn't tell, even when asked directly.)

    How do you get this sort of information in an e2e environment? Shouldn't there be a less wasteful way to determine this sort of thing?

  • Not to mention that bandwidth is a lot easier to advertise...

    True enough! People buying ISP service can understand things like bandwidth (of course), reliability/uptime, and even peering quality much better than complex priority protocols that require administration. So the marketers focus on that, and QoS withers on the vine.


  • Paraphrased: There is no such thing as enough bandwidth.

    That is incorrect. The most bandwidth I can use is limited by my disk subsystem, CPU, my input devices (including video camera), and the number of individuals who want to see my content all at one time.

    Remember, there *is* such a thing as too much porn.

  • If you're not billed for it, if you have to wait an hour for your e-mail to arrive, but your full-screen 3-D surround-sound streaming videophone is instantaneous, guess which one your wife will use to give you the grocery list.
  • It's all well and good to recognize that the benefits of this e2e paradigm will probably outweigh the negatives, but it's obvious that in this new age of technologies it's not always the best man that wins. By the time someone gets around to Building a Better Gnutella, the judicial branch will surely have caught up with it. If the government can't regulate it, it doesn't stand much of a chance.
  • by kali (32955) on Wednesday December 06, 2000 @11:39AM (#577223)
    That's right, ATT was facing growing competition, so they had the government declare them a "natural monopoly." In 1984, the government was just trying to undo the mistake it had made 60 years earlier.

    This has to be one of the most asinine conclusions I've ever seen. Did you even read the rest of the Britannica entry? Specifically, the part you left out in the middle of your quote. I assume you did, since it tells a very different story than the selective quote you provided.

    For those who are too lazy to read for themselves, here's the part tylerh left out:

    Vail was brought back into the company as president in 1907, and from then until his retirement in 1919 he molded AT&T into virtually the organization that lasted until 1984. Vail set about trying to achieve a monopoly for AT&T over the American telecommunications industry. He consolidated the Bell associated companies into state and regional organizations, acquired many previously independent companies, and achieved control over Western Union in 1910.

    In other words, AT&T used standard laissez faire capitalist techniques to achieve their monopoly. But what about the Graham-Willis Act of 1921? It didn't create the monopoly; ATT had done that themselves. Do a google search on it, and you'll find that (essentially) all it did was exempt telcos from the Sherman Antitrust Act.

    So what's the moral of the story? Aside from the fact that tylerh has no problem using selective quoting to deceive the lazy, what we see is that ATT basically bought a law that gave them (temporary) immunity from antitrust prosecution. The gov't didn't create ATT, or do anything to create its monopoly. They simply refused to use federal law to restrain ATT in 1921. To imply that ATT used a gov't-granted monopoly to put down their "growing competition" is sheer intellectual dishonesty.

  • The next focus was network security. Firewalls are probably the most obvious breaks in the end-to-end paradigm -- after all, these devices' sole purpose is to stand in the way of network connections, and decide which are permitted and which are not. Participants brought up (but thankfully, quickly moved past) the true-but-useless point that if all operating systems were secured properly, there would be no need for firewalls.

    Not true. Without an IP masquerading firewall, I'd have to have a separate Internet IP for every networked device on my home LAN. This is expensive and stupid. NAT is a good thing. If a machine doesn't offer a service, why does it need to be on the raw Internet? Even if it does, I still might not want that service available to the whole world. Easier to set that up once on a firewall than on every single box I have to manage, wouldn't you agree?

    Would you really want the ability to program your VCR available to the entire world? Appliances with web GUIs are coming, and firewalling them off is a good thing. Why waste routable addresses on the 'net if you don't have to? Why encrypt and authenticate things (the VCR) when you don't have to?

  • Deregulation of electrical power supply in California is perhaps leading to higher electrical bills in the long run.

    Odd, it's not having that effect in PA...
  • Okay, Jon, so how much are you willing to pay for it? That's the key question.
  • I've thought before that the only way you could really provide any kind of QoS that made sense was to have an endpoint of the network mark packets with differing levels of priority, and then have routers and gateways that decided how the traffic should be handled based on the markings from that device. Thus, a user could mark an IP phone call as being much more important than web or ftp traffic, and get a smooth call while downloading. A set-top box could indicate that the download of the TV stream was much more vital than the chat traffic that went with it.

    If the user marked all traffic as high, it would simply have the same result as marking it all low or medium - the total available bandwidth would simply be spread evenly among the applications using it.
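
The claim in this comment can be checked with a toy model (my own sketch, with hypothetical flow names) in which a user's fixed bandwidth is split in proportion to the marks -- only relative priorities matter:

```python
# Toy model: a user's fixed bandwidth is divided among that user's own
# flows in proportion to their priority marks. Only the relative marks
# matter, so marking everything "high" equals marking everything "low".

def shares(bandwidth, marks):
    """Split bandwidth among flows in proportion to their marks."""
    total = sum(marks.values())
    return {flow: bandwidth * m / total for flow, m in marks.items()}

print(shares(100, {"ip-phone": 3, "web": 1}))  # phone 75.0, web 25.0
print(shares(100, {"ip-phone": 3, "web": 3}))  # even split: 50.0 each
print(shares(100, {"ip-phone": 1, "web": 1}))  # identical even split
```
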
  • What you want is not carrier-controlled QoS, but rather the ability to control the priority of the various sockets on your own host. After all YOU know exactly what data is important to you; your ISP doesn't have a clue.
  • Allocate a fixed amount of bandwidth to each user. It sucks in that you can't get a little extra, but that's a relatively small price to pay for that control.

    Theoretically, by doing so you could simply add up the bandwidth given to your endpoints and purchase that as your upstream connection. That provider could do the same thing, et cetera.
  • I'm not sure that I agree with your reasoning, although it does appear that the original comment was referring to South Dakota.

    Taking your example, you can say "I went to the mall" and it means the mall in your town. If you say "I went to the Mall", then it means you went to that monster one in Minnesota (or, depending on context, maybe the big parklike area in downtown Washington, DC). So even though the capitalized, proper version of the noun means one specific place, the non-capitalized but specific use of the term means something like "the closest example of a mall".

    So if there were bad lands in your state, you could say "We went through the badlands" and not be talking about South Dakota. You could even say "We went through the badlands of Nevada", if Nevada had any badlands of course. It's only when you say "We went through the Badlands of Nevada" that you are definitely incorrect.

    Yes, I don't want to get back to work, why do you ask?

  • The group noted that NAT can be eliminated simply by putting more addresses into circulation.

    Ick! Not quite. NAT also lets users squeeze several machines through one IP address from stingy ISPs. ISPs would have to be giving away multiple IP addresses to users before NAT disappears. It's possible the huge IPv6 space will trigger the giveaway, but I don't see it as a given.

  • corporations abuse power.

    And where does this power come from? In many cases (I won't go so far as to say all cases) it comes from Government, Inc. The DMCA is an obvious example.

    Government regulation of things like power and cable TV also seems to always mean a Government granted monopoly, leaving one company exclusive access to a bunch of consumers to pull money from, until they are wealthy enough to bully small competitors, even if their official monopoly is repealed.

    Unfortunately, since this [the monopoly-caused imbalance of power] has already happened, getting Government inc. out of businesses they shouldn't have meddled with in the first place without leaving a nasty power imbalance between the former monopoly and the small potential competitors is going to be really tricky business...


    A vote for the lesser of two evils is still a vote for Evil.
  • I work as an administrator for an e-commerce company which provides a variety of commerce services including software downloads. Our quality of service is very dependent on quality of bandwidth. We have discussed QoS as a way of ensuring that employee uses such as Napster can be allowed without negatively impacting our business-critical network applications. We segment this as much as possible inside our own system (production servers on separate networks from users' workstations, etc), but eventually all the traffic is moving through the same set of pipes out onto the Internet. That's one place where QoS is an advantage - when you want to protect your primary bandwidth usage, without actually restricting other activities outright.

    Another good application of this is backups over a network. You don't want backups (which are a pig for bandwidth, but not time-sensitive) having a negative impact on your production network apps, but at the same time segmenting backups onto a separate network means a lot of wasted resources. QoS is a decent solution for this, particularly on small LANs and systems where you can't afford to have a separate network for your backups.

    I think most users' primary objection to this is that good applications of the technology will get lost alongside evil applications of it--your Internet access is restricted to 300 bytes/second because the CEO or the resident BOFH has QoS'ed everybody else down to nothing to protect their own Internet FPS games. But really the problem here is bad sysadmins and bosses, not bad technology.

    The DoS-via-QoS question is an interesting one, though. It's a kink that will have to be worked out before QoS can really be viable, at least in environments where your Internet service is mission-critical.

  • You are basically making the ultimate end-to-end argument -- that is, the physical network into your house and the transmission of data over it should be simple and robust enough to allow higher-level protocols and applications to do whatever they want; be it send video, data, phone conversations, or whatever. If this were the case, various providers could all adapt their high-level applications to use the same low-level infrastructure (i.e. one single cable running into your house could provide TV, radio, Internet, and phone services).

    This isn't exactly a useful comment, of course, but I just thought it was interesting - uber-e2e.
  • Every user on the Napster Network connects to the same central server. However, there are other central servers [napigator.com] that run the Napster protocol (and allow formats other than MPEG Layer 3 Audio; use it for mirroring the Linux kernel tarballs [kernel.org]?), and you can run your own [sourceforge.net] on a nix box or winbox. The lawyers may shut the Napster Network down, but the success of one big red H [bigredh.com] shows that the game of whack-a-mole [8m.com] is a surprisingly weak form of resistance. Resistance is futile.
  • ATT was facing growing competition, so they had the government declare them a "natural monopoly." In 1984, the government was just trying to undo the mistake it had made 60 years earlier.

    The gub'mint didn't create the Windows "monopoly". It was a natural effect of MASSIVE NETWORK EFFECTS. Windows obviously (still) has competition.

    The phone system is a prime candidate for even stronger NETWORK EFFECTS. Imagine telco Foo has 51% market share and telco Bar has 49% market share and their systems are islands. Eventually most people will switch to the telco that allows them to connect with more people. It's not necessarily a quick, clean process, though.


  • your argument might hold up for DNS, but not routing. when people talk about dumb vs. intelligent networks, do you think they're talking about the wires? it's routers, switches and other network equipment that have that "intelligence" or not. and they clearly have some - like the routing info. the e2e model doesn't truly exist unless you've got either direct control over your switch or a direct connection to every endpoint. note i'm advocating neither; i'm just pointing out that's what's required for a truly e2e network.
    2^32 = plenty. More than enough.

    2^128 = loads. Way more than enough.
    ah, good old "proof by lack of imagination". think about it. if every device (where device is some set of my toaster, TV, socks, refrigerator, etc.) has an IP address, and is connected to the network, that leads to way more infrastructure, too. more nodes for management. it's a worse-than-linear increase. but given that, the numbers are still pretty generous. bringing us to your assertion:
    The problems with ipv4 do not involve the number of nodes; it is a lack of management, and arguably manageability, of the address space an org is allocated.
    to which i agree entirely. another reason e2e doesn't make sense. i should be able to design my network any way i like, you should be able to design yours any way you like. it should then be up to us to provide for inter-connectivity. it's ironic that, in an internet culture dominated by talk about de-centralization of power and no single "root", ICANN and IANA still define what pretty much everyone can and can't do. to say nothing of the DNS mess. those I* folks have way too much to say about how i run my network.
    I'm sure your experiences with NAT all involve your home or campus network.
    you are mistaken. unless you take a really, really broad definition of "campus". try dealing with such things for multinational corporations. ones that used to be the telephone company. i'm well aware of the issues surrounding NAT today. but i believe that's because they're trying to work within an e2e network, rather than being able to admit e2e isn't always the way to go.
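
For reference, the raw numbers being argued over in this thread work out as follows (simple arithmetic):

```python
# The address-space sizes quoted in this thread, computed directly.
ipv4 = 2 ** 32     # 4,294,967,296 addresses
ipv6 = 2 ** 128    # roughly 3.4e38 addresses

print(f"{ipv4:.3e}")            # 4.295e+09
print(f"{ipv6:.3e}")            # 3.403e+38
print(ipv6 // ipv4 == 2 ** 96)  # True: IPv6 is 2^96 times larger
```

Whether that is "plenty" once every sock and refrigerator gets an address is the management question the thread is really arguing about, not the raw count.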
  • > in the 90-95% range, and even Standard Oil
    > never got that far.

    i don't believe the above statement is true. standard oil consistently maintained 90% of market share, i think. and the only reason standard oil at the height of their power never got more than that was entirely for pr reasons. he could've easily crushed them - among numerous other advantages, he negotiated nasty deals with the railroads that not only gave him huge discounts but actually had the railroads pay him every time they shipped oil for someone else.

    rockefeller thought that a small amount of token "competition" would assuage public opinion and make it harder for him to be prosecuted (which was true for a while, but ultimately didn't save him). he intentionally kept prices at a level high enough to allow his few remaining competitors to survive. there are recorded instances where he told his subordinates to back off from a potential kill - once he completely dominated the market, he felt no need to exterminate those few left.

    so it's not like he wasn't as powerful as at&t or microsoft - he simply never fully exercised his control because he was afraid of a public backlash, and didn't think he needed more than 90% anyway. (the lawsuit was actually spurred in part by the actions of his successor, who was considerably more ruthless in the market and less able to keep up moral appearances.)

    unc_
  • I think you're confusing addressing issues with access issues. You can firewall off any devices you want whether you use NAT or not, but with NAT you incur all of the penalties of having to muck with the headers of every packet. Also, when you use global addresses, it gives you the option of creating extranets without having to drastically re-address devices.

    Once IPv6 opens up the address space, I think whatever remaining legitimate reasons exist for using NAT will quickly melt away.

  • power deregulation is a very bad example, as it is a completely atypical industry.

    the problem with electricity in the usa is that it has a history of being very, VERY heavily regulated with extremely restrictive price controls, is very sensitive to political/public opinion but at the same time requires massive outlays in capital expenditures.

    it's a recipe for complete disaster. there are two problems with power in the us:

    1. power generation - nobody wants a power plant in their neighborhood. they're too worried about nukes, air pollution, or loss of their whitewater rafting. it is very difficult (both in time and money) to build a new facility. political/public backlashes against building new generators have created artificial scarcity despite massive improvements in technology. hence, america does not have enough power-generating capacity to meet its future needs - prices will go up, no doubt about that. they need to in order to compensate a company for all the shit they have to go through to open up a new plant.

    2. the power grid - everyone's plants are connected together, so there is a shared responsibility for maintaining the power grid. quite naturally, no single company wants to pay to repair/increase power grid capacity, because there's no way in the system to charge in terms of where electrons are flowing through the network. and gov't isn't too eager to foot the bill either, so the grid is falling apart. the lack of accountability for power grid maintenance is why electricity should never have been deregulated.

    the problems in ca are actually more related to the power grid. they're afraid they'll blow the entire system by trying to push too much power through it, which would of course black out most of california and probably a significant part of the west coast.

    pick a different example for corporate abuse of power. the power companies are, ironically, relatively powerless because of their situation.

    unc_
  • 2^32 = plenty. More than enough.

    2^128 = loads. Way more than enough.

    My comment about the raw numbers involved in ipv4 vs ipv6 is intended to show that the raw numbers are pretty meaningless. ipv6 makes provisions for site-local addresses and ipv4-compat address space. I'm assuming that most ip-enabled bake-ovens will go in site-local, maybe with a public-facing control system.

    ...

    your argument might hold up for DNS, but not routing. when people talk about dumb vs. intelligent networks, do you think they're talking about the wires? it's routers, switches and other network equipment that have that "intelligence" or not. and they clearly have some - like the routing info.

    When routers and switches are moving packets they are dumb; route lookup could be handled by a spring-driven mechanical switch. Populating the routing table is a distinct function from forwarding a packet. Calculating a spanning tree is not frame switching. These functions can be performed on systems other than the router or switch doing the forwarding.

    ...

    to which i agree entirely. another reason e2e doesn't make sense. i should be able to design my network any way i like, you should be able to design yours any way you like. it should then be up to us to provide for inter-connectivity.

    You can set up your network in any way you see fit. Just follow the standards and no one gets hurt. Routers and switches mangling packets and lying to the other end of the connection will often break those standards. Troubleshooting such problems will make you look like a stud and teach you to curse like a sailor, but there is nothing else to recommend them.

    When those standards are broken by an intermediate system deciding to snoop higher layers, then there is a fault at that system. It is not the duty of designers of the widget control protocol to learn how to interoperate with every router, switch, or firewall out there.

    End-to-End requirements are not some holy thing that ip geeks love because some standards body says to. People argue for it because it works. e2e purity is not the point, but gratuitous incompatibility often results when it is broken without extreme care.

    --

  • You miss the point - another poster noted that the way to handle the problem you describe is to provide a fixed level of bandwidth to each device or household; the priority would only affect the traffic within your current usable range of bandwidth (you might get less or more) - in effect the oversubscribed model but on a smaller scale.

    Thus, as I said, it doesn't matter how you fiddle with your priorities, as you are only affecting your OWN traffic, not that of others! If you want to do voice over IP then you had better make sure you've paid for the minimum bandwidth to provide that service.
  • That's exactly the kind of unwritten assumption that I had in mind when I was writing the original comment! Thanks for making that explicit.
  • If Joe Q. Example decides to live out in the middle of the Nevada desert where there are no utilities, how, exactly, does society benefit by paying to give him a phone hookup? Does someone antisocial enough to want to live far, far away from society in general really benefit that society by having a phone connection to it?

    Now that's a remarkably farsighted view for the techno futurists at /dot.

    Now imagine this:
    What if Joe Q. Example wants to live in a rural agricultural community far out in the Nebraska plains and doesn't want to go Amish or to live in tech central, but to be a part of a mainstream free society that is free from the 'convenience' of global integration and techno modernism? In a capitalist society, few rural communities would have more than a handful of phone lines or internet connections, and those would probably be in a central location like the county seat library.

    I know of a remote rural community in AZ near the Sonora border where Maryjane is cheap from 'international entrepreneurs', organic vegetables are a viable industry, open space and mountain views are easy to come by, and land is cheap. It's just about 40 miles from this close knit community to a major city, too. But nobody moves there, because there is no easy access to subsidized phone lines or publicly subsidized water works. Furthermore, the highway is long and roundabout and takes hours to travel because it's not as heavily subsidized as an interstate.

    It's a wonderful place and it would die promptly along with its close knit, open, unique, and worldly culture if 'universal service' and the other destructive forces of socialist globalism made their grubby way in there.

    Land would soon be too expensive for homesteading immigrants and native children would be forced to move to the city to earn a living. Rich yuppie second homes would start crowding the landscape and the colonial town center would be redone. The two famous writers who live in the area would escape from scrutiny by living alone or finding another community where they could live as a part of a community and not a celebrity.

    But that would have to be outside the USA, as the socialist menace of 'universal service' and various other leveling schemes carry us down into fascism.

  • The badlands are in South Dakota, not Nevada.
  • I recall attending a talk by Nathan Myrvold a few years ago in which he tried to guess at how much bandwidth would be "enough". I don't recall the exact numbers, but it was based on an estimate of the total bandwidth of the human sensory system, then delivering some multiple of that to every user.
  • People replying to my sig annoy me, that's why I change it all the time.

    Well then, don't make it so darn enticing to do so :) (Offtopic, -1)
  • Think about something like voice over IP. With a phone call you are not interested in bandwidth (anyone who has used a modem knows phone bandwidth is not that high); what you are interested in is priority. You don't want to say "Hi, is Bob there?" and wait 10 seconds for the message to get there.
    Basically the streaming of any live data requires priority. And as the net is used for more live-data applications, priority will become more of an issue.
  • If you don't capitalize "badlands", then it is no longer a proper noun, but just a generic term referring to bad lands. Any region can have bad areas of land, so it is quite possible (though I've never been there to see) that there are badlands in Nevada, but definitely no Badlands in Nevada.

  • An issue, that is.

    Why is it an issue?

    Why does regulation even have to come up? I mean...

    If I take my little network, and hook it to yours, in a completely private deal, who is the government to regulate what we can and cannot send? At what point then, as this grows, does it suddenly become regulatable?

    If the public at large want regulation, they can pay for their own network.
  • Yes, deregulation is leading to higher power bills, but the root cause is increasing demand for electricity. California is in the midst of an electrical emergency (they lit the state Christmas tree for only 5 minutes!), where the total power used is within 5-7% of available. Anytime you have scarcity of resources, the price goes up (supply and demand are powerful forces nearly beyond human comprehension). Power companies will respond by increasing capacity, but that's a slow process.
  • I don't know of any router that goes out of its way to implement 'the most brain-dead-possible' queueing strategy.

    You can easily do fair-queuing without priorities.

    But wait you say, I'm more important than that guy -- I don't want fair-queuing, I want me-first-damnit! (even if you don't need it)

    Well, that's where you need priorities, but priorities without costs are useless because just about everybody thinks they are King of the World(tm).
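    A toy simulation makes the point (a sketch, not real router code): with strict priority and no cost attached, one flow claiming top priority for free starves the rest, while plain per-flow round robin -- fair queuing without any priority field -- keeps serving everyone:

```python
import heapq
from collections import deque
from itertools import count

def strict_priority(packets):
    """Drain (flow, priority) pairs lowest priority number first;
    ties keep arrival order. Priority is free, so everyone claims it."""
    arrival = count()
    heap = [(prio, next(arrival), flow) for flow, prio in packets]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

def round_robin(packets):
    """Fair queuing approximated as per-flow round robin: no priority
    field needed at all."""
    queues = {}
    for flow, _ in packets:
        queues.setdefault(flow, deque()).append(flow)
    out = []
    while any(queues.values()):
        for q in queues.values():
            if q:
                out.append(q.popleft())
    return out

# Flow A floods four packets at "top" priority; B sends two normal ones.
arrivals = [("A", 0), ("A", 0), ("A", 0), ("A", 0), ("B", 1), ("B", 1)]
print(strict_priority(arrivals))  # ['A', 'A', 'A', 'A', 'B', 'B'] -- B waits
print(round_robin(arrivals))      # ['A', 'B', 'A', 'B', 'A', 'A'] -- B served
```

    Attach a cost to the priority bit and the flood stops being free; that's the whole argument.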

  • You only need priorities when you no longer have the bandwidth to serve everyone's needs.

    It's like the freeway, you don't need the stoplights on the ramps unless there is insufficient capacity. Of course, roads and networks suffer the same fate -- building more capacity only attracts more use -- and in general the end users are (unfortunately) buffered from the true costs (which leads to higher use -- how much gas would you buy at $15/gallon?).

    The same goes with many networks -- we have students sending out 100 GB/day from their dorm rooms because they are buffered from the true costs (I paid my dorm bill - I deserve this!)

    But in general, with networking it has been "cheap enough" to simply keep piling on the bandwidth -- perhaps someday Moore will give out and we'll have to work smarter not harder.

  • abandoning the commitment to universal service would be a great loss to society.

    This comment, in my opinion, is a little bit excessive. At the risk of sounding a bit callous, I have to ask (honest question, now, no flamebait intended):

    If Joe Q. Example decides to live out in the middle of the Nevada desert where there are no utilities, how, exactly, does society benefit by paying to give him a phone hookup? Does someone antisocial enough to want to live far, far away from society in general really benefit that society by having a phone connection to it?

    I realize that's a bit of an extreme example, but at least it illustrates the point. Comments? Clarifications? "Shut the heck up you idiot"'s? :-)


    A vote for the lesser of two evils is still a vote for Evil.
  • You've clearly missed the point of QoS. It's primarily about improving latency without overallocating bandwidth. At the moment, the only way to get lower packet latency is to buy more bandwidth - an expensive and often unnecessary proposition. QoS allows bandwidth and latency to be tweaked independently (or at least more independently than they are now). This has definite advantages to a variety of applications including voice-over-IP (VoIP), video conferencing, audio/video streaming, and games. VoIP in particular would benefit - it takes only about 1600 bytes/sec. for a decent VoIP data stream (i.e. one that uses a codec that's both reasonably efficient and reasonably fast). However, the piss-poor latency of the existing internet infrastructure means that any VoIP that takes place these days is likely to be full of delays and drop-outs. QoS could solve this problem. Games would obviously benefit tremendously from QoS. Imagine how happy the gaming community would be if 56K modem users could compete on a level playing field with users sitting on a dedicated T3.

    You seem very focused on the web. You mention the web several times in your post. Keep in mind that there are other reasons to send packets across the internet besides viewing web pages.

    As for VoIP through a central server, one significant advantage I can see is anonymity. If connections are made through a central server, it's possible to conceal the IP addresses of the participants from each other. Of course, it could be abused in a number of ways, too, but I just wanted to point out that there was, indeed, at least one clear reason why routing VoIP through a central server would be a good thing.

    And finally, end-to-end is becoming increasingly less viable due to firewalls and NAT. Case in point: Napster. If two Napster users are behind firewalls or NAT routers that prohibit incoming connections, they can't exchange files with each other. This is another argument for routing traffic through central servers, too - it eliminates this problem. With NAT and firewalls becoming more commonplace, and the internet being used for more and more varied applications, I think now is a great time to start looking at alternatives to the end-to-end approach.
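    A minimal sketch of that central-server pattern (hypothetical code with a made-up port; real systems are more involved): two peers that both refuse incoming connections each dial out to a public relay, which shuttles bytes between them without revealing either address to the other:

```python
import socket
import threading

def relay(listen_port):
    """Rendezvous relay: accept two outbound connections from NATed
    peers and copy bytes between them in both directions."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", listen_port))
    srv.listen(2)
    a, _ = srv.accept()  # first peer dials out to us
    b, _ = srv.accept()  # second peer dials out to us

    def pump(src, dst):
        # Neither peer ever learns the other's IP address.
        while data := src.recv(4096):
            dst.sendall(data)
        dst.close()

    threading.Thread(target=pump, args=(a, b), daemon=True).start()
    pump(b, a)
```

    This also sidesteps the two-NATed-Napster-users problem, since both sides only ever make outgoing connections.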

  • by tylerh (137246) on Wednesday December 06, 2000 @12:58PM (#577256)

    kali,

    You are correct that my Britannica link did not fully support my conclusion. Slashdotters are not renowned for their long attention spans, so I keep my posts brief.

    Since you've read this far, let me take more of your time and tell a fuller story. It's more interesting. Sadly, I don't have supporting links handy.

    As Kali points out, ATT reached for monopoly via "standard laissez faire." If memory serves, they had about a 70-80% market share when, as Kali correctly points out, they bought themselves anti-trust protection and stomped the rest of their market. While we'll never know, many (myself included) doubt that ATT would ever have reached their ultimate 95%+ penetration without government help. Microsoft seems stuck in the 90-95% range, and even Standard Oil never got that far.

    Back on topic, Kali is missing a key fact. Consistent with "standard laissez faire," the ATT "monopoly" was already under attack. Ever wonder why security companies get their bare copper provided to them by the telcos on the cheap? Ah, there is a tale. By the 1910s, private alarm companies had sprung up that were laying their own copper. ATT realized that this was a competing infrastructure, so they cut a deal: we'll give you our copper cheap, you stay out of voice. This was formalized in the rate tariffs as the government set concrete on ATT's "monopoly." And there matters stayed, until DSL pioneers started ordering bare copper "security alarm" lines for their data networks. Nasty lawsuits/hearings ensued, where this juicy history turned up.

    ... and if mine was the "one of the most asinine conclusions [you]'ve ever seen," kali you are clearly new to slashdot.

  • I see too many limitations in end to end networking in anything but small groups.

    So you don't hold out too much hope for this new-fangled "In-ter-net" thing, do you? Client-server is end-to-end; you have two machines at the end of a dumb pair of copper wires, a dumb piece of fiber, or maybe some dumb RF. Maybe you need routers or switches in between, but those are made just "smart" enough to do their jobs and no smarter.

    This reminds me of an article that was posted to /. a while back in praise of dumb fiber rather than the "smart networks" that the QoS folks/telcos continue to push. Here's the link. [upenn.edu]

  • kali nailed it on the head...

    Jay (=
  • by Thorin_ (164014)
    <rant>
    Am I the only one out there who hates reading things in pdf format? Can't they post things in simple html (or even complex html)? I would rather read things in some proprietary format such as M$ word than in crappy pdf.
    </rant>

  • Don't forget what AT&T offered in exchange for its monopoly: universal service. That was an exceedingly good deal, all told. It only had to be undone when AT&T got too big for its britches and began to actively suppress competitors while not improving its services; classic behavior of a monopoly.

  • If you said "I went to the great mall", while there are plenty of great malls in the world, the great mall is in Minnesota.

    And if you said we went to the great mall in Nevada, you would be incorrect - as the great mall is in Minnesota. While California does have a "the great mall" as well, you could even say "the great mall of California" because it is still a place. When you use 'the' in front of a name of something, you are giving it a specific instance. Your example of the closest example of a * would still serve the case, as the closest example of badlands would probably be in Arizona or Utah (depending upon your location in Nevada) - as Nevada doesn't even have any nonspecific badlands. All Nevada has is deserts, really. (Using the badlands definition of barren land characterized by roughly eroded ridges, peaks, and mesas.)

    I don't want to work either.

  • Who created the AT&T monopoly? Why, the natural laissez-faire forces in a market that is inherently a natural monopoly, that's who. Sure, the government *regulated* the monopoly, but don't kid yourself for a second into thinking that the monopoly would have been nonexistent if the government had let things run their own course. Once Ma Bell had the first phone poles up, the first phone network, nobody else could possibly compete, because it would be impossible to find somewhere to start small. Nobody is going to sign up for some new tiny phone company that doesn't hook up to the rest of the phones in the country yet. Government regulation forces telcos to let other telcos connect to them. Take that away and nobody could ever break into the market once one company has the majority of customers, no matter what the quality of their service might be, or the price, or any of those other factors that companies normally use to compete with each other.

    I once lived in CT, home of SNET, a phone company that existed since the time of AT&T and never was a part of their network. After AT&T got on a roll the government agreed to their monopoly so long as they gave "universal service". The government should have stayed out of it. So what if AT&T naturally became the largest network provider; as long as people were free (from government regulation, not market forces) to set up their own phone systems there is not a problem.

    In a free market there is no guarantee that every single individual will be 100% satisfied, only that they have the opportunity to try to satisfy their needs without government interference. With a government-regulated market you guarantee that some people (politically well connected) will be 100% satisfied and the rest of us will be forbidden by law from even trying. I'll take my chances with the market, because I don't feel like bribing politicians.
    Stuart Eichert
  • NAT is an easy way to provide reasonable security, and an easy way to create a network from only one IP address. Hence, the increasing popularity of so-called security appliances. BTW, these are nice XMas gifts for your geeky significant other.

    NAT will not save you if you are running an insecure server, but for insecure clients, it will save you from hackers trying to create a connection to the client. At least that is true if the NAT is properly set up.

  • This is exactly what I would like to see. Say I am on a low-rate DSL hookup, my roommate is doing some remote system administration and I'm downloading some large game demo. His X session is sucking up bandwidth that's making my game download slower...

    Honestly, what's going to keep me from fiddling my connections (at either end) to do ftp-over-voice-over-ip in order to keep my priority high? After all, compressed, encrypted voice looks a lot like compressed, encrypted programs. QoS will only work if all players are trusted. As far as I can see, QoS will only let us download all that pr0n faster (so bring it on! :).

  • You make a very good point. Without government intervention the corporations would rape the land and poison the waters and the air. Of course this would stimulate the economy, because people would be forced to buy bottled water, and cancer is very profitable because it's a chronic and long-lasting disease.
    The only reason they want low jitter is to send faxes. That's right: VoIP's tight parameters are there so fax machines don't get confused by their warbles being digitised again. Makes you proud to be in networking, doesn't it?
    QoS is bogus because of Cheshire's Law [stanford.edu] - for every network service there is a corresponding disservice.
    Nothing in networking comes for free, which is a fact many people seem to forget. Any time a network technology offers "guaranteed reliability" and other similar properties, you should ask what it is going to cost you, because it is going to cost you. It may cost you in terms of money, in terms of lower throughput (bandwidth), or in terms of higher delay, but one way or another it is going to cost you something. Nothing comes for free.
  • In Montana, however, it's crippled the economy. Several large companies had to shut down this summer because their power rates went up so high. Oddly enough it was these same companies who were pushing for deregulation, thinking the consumer would get screwed and that they would make out OK. They forgot that the consumers don't get screwed for another two years, when deregulation hits them. Of course, since they got screwed first, they will tell the legislature to redo the law, and their lapdogs in the legislature will rewrite the law so that only the consumers will get screwed and not the CEOs.
  • No user is really secure, regardless of what OS they use, this we all know, so why does anyone even mention NAT, firewalls etc. I would like to see an effective means of stopping script kiddies (which as a home broadband user are my biggest problem) and see applications which didn't open gaping holes in the OS. Functionality doesn't have to be a security risk.
  • seichert writes:
    If a bunch of bureaucratic slimy corporations do not provide what I am looking for I can buy something from a small business.

    How right you are. Indeed in the PC software business (with no gov't regulation), when you've been hit by a macro virus and a dangerous executable email attachment, and you've had it with that slimy corporation's lack of security features, and you're frustrated with their continued denial that there is anything wrong in the design of their product, the free market forces are ready to serve. Just go out and buy one of those many competing (viable) office and email/contact software packages!

  • There is actually a way to do diffserv that is consistent with the end-to-end principle, at least I think it is.

    The idea is that you pay your ISP for a right to send at a certain nominal bandwidth. If you send faster, your packets will be marked with lower priority. At the network's core routers the lower priority packets will be dropped if congestion occurs.

    This idea is called SIMA. You can learn more about it at http://www-nrc.nokia.com/sima/ [nokia.com].

    (I have no relation to them. I just know these guys and really think they've come up with something useful.)
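    The scheme can be sketched as a token bucket (an illustration of the idea as described above, not the actual SIMA algorithm):

```python
import time

class NominalRateMarker:
    """Token-bucket sketch: traffic within the nominal rate you paid for
    is marked high priority; the excess is marked low priority, and core
    routers drop low-priority packets first when congestion occurs."""

    def __init__(self, nominal_bps, bucket_bytes, start=None):
        self.rate = nominal_bps / 8.0        # refill rate in bytes/second
        self.capacity = float(bucket_bytes)  # how big a burst stays "high"
        self.tokens = self.capacity
        self.last = time.monotonic() if start is None else start

    def mark(self, pkt_len, now=None):
        now = time.monotonic() if now is None else now
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_len:
            self.tokens -= pkt_len
            return "high"   # inside the paid-for nominal bandwidth
        return "low"        # excess: first to be dropped at a congested core
```

    Within your nominal rate everything goes out marked high; push past it and the surplus is the first thing a congested core router throws away. The marking happens at the edge, so the core stays dumb, which is why this stays roughly consistent with end-to-end.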
  • The company I currently work for has what I think is a good example of what the article is about. The company is spread out over 5 continents, each having its own separate servers for everything: file, application, mail, etc. After that all these networks are linked together through a number of connections back and forth, and then finally the connection to the internet runs through one final firewall. This effectively results in thousands of people surfing all appearing as the same IP address. Guess that explains why the /. polls tell me I've already voted 70 times....
  • Again, "properly set up": I use a linux box as a firewall with IP Chains for my rig here in the house, and of course have more computers on my @home network than just the 1 IP I pay for. But for real security, for the average user (anyone running linux, or who knows that it is not a software package you install in windows, is _not_ an average user), something else has to give.
  • ...but have you seen that Zelerate banner ad? When I checked the main Slashdot page, I was mooned by seven animals. Then I was greeted with the question, "What does your back end look like?"

    Ugh. To be mooned by seven animals on an innocent webpage. I certainly won't be saying nice things to Zelerate anytime soon.

  • Yet again, a bunch of academics calling for more government regulation. Can't they realize that government regulation gave us the AT&T monopoly? If it were not for deregulation and the relatively light regulation of ISPs I would not be able to connect to the Internet in so many ways (DSL, cable, dial-up, cell phone, PCS cell phone, Sprint broadband (wireless), two-way satellite, T-1, T-3). People are foolish to think that if you give the government more regulatory power they won't use it to favor their friends and hurt their enemies. Leave the free market alone.
    Stuart Eichert
  • Some ivory tower academics proposed a vague model that wasn't adopted by the industry or the people. Here they attempt to save face by telling you why.
  • The Napster debacle should show us that a centralized architecture is more prone to snooping / regulation. Despite the problems that Gnutella is experiencing I think the concept is much better. No central server to connect through, no single point that can be watched. I doubt Metallica would have been able to track downloads of their songs had Napster been as decentralized as Gnutella, much less been able to ban users.

    The hard part of end to end is changing any design component that is found to be inadequate, like Gnutella's scaling problems. Rather than changing relatively few servers you would need to push updates out to a huge number of smaller components, be they phones or PCs. Regardless, the long term benefits of a decentralized architecture, which is what end to end used to be called I think, probably outweigh the negatives.

  • I also feel QoS is an outdated concept. In order for it to have any meaning at all, it would have to be billable (otherwise everybody would send everything at the highest priority). To implement billing based on QoS would add considerable complexity to the already complex job of switching and routing packets, and this would increase latencies enough to counteract any benefit from having the QoS in the first place.
  • Unfortunately, if you just said, "We went through some bad lands of Nevada" that would be correct. But if you said, "We went through the bad lands of Nevada" it sounds as though you went to a place. Such as, I went to the mall. Badlands is also defined as a plural noun, but in a context specific case (the badlands) it is defined as: A heavily eroded arid region of southwest South Dakota and northwest Nebraska. The Badlands National Monument in South Dakota was established in 1939 to protect the area's colorful rock formations and prehistoric fossils. dictionary.com [dictionary.com] is my friend, should be yours too.

    You can't say we went to the badlands, unless you are talking about the ones in South Dakota - otherwise you just went to badlands.

  • This, IMO, is the ultimate argument for the overall infrastructure to remain end-to-end (or, in the case of telephony, become end-to-end). As noted in the article in the VoIP discussion, the end-to-end-like architecture still allows for the possibility of use of central control near each end, so those who use that portion of the network can decide which local architecture suits their needs.
  • the article posted so far, and from what i can tell from it, the conference itself, assume that e2e is a good idea. the discussions all revolve around "how do we save e2e" or "what do we do about things that violate e2e" before asking "is e2e a good idea, given the current state of the internet, and its likely evolution?".

    i know i'm risking flaming here, as e2e is something of a holy cow on the internet. but bear with me. there are two ways of looking at it: internet-centric, or network design in general. let's look at it first from an internet-centric point of view.
    the basis of e2e is that all the "intelligence" is pushed out to the edges. well, upon examination, this doesn't hold up in the practice of the internet. please raise your hand if you've got the entire internet routing table on your host. or the full hostname->ip mapping for the internet. no? that's right, that info (intelligence) comes from the network - switches, routers, DNS servers, and the like. a common counter is "DNS isn't an application" but to the network, it is. it's just transferring bits around. so's routing. but the intelligence that makes it work lives in the network, not on the edge (my host).
    now look at it from a general network design, non-internet-centric point of view. from the facilities point of view, the only application the "network" knows about is setting up and tearing down data paths. whether it's a TCP stream, a UDP datagram, an ATM connection, or a phone call - the "network" delivers data in defined ways. while QoS can be described as a (potentially problematic) upgrade to this service, NAT is not inherently anything more than a breakdown of the DNS system (although, as currently implemented, it introduces more problems). it's compensating for a failure of IP: the failure to allow real hierarchical address management. IPv6 doesn't solve this, it just pushes the problem farther out into the future. if, as many people have suggested, my toaster, car, telephone, TV, and socks will all have IP addresses one day, even the new addresses provided by IPv6 won't last very long.
    so what to do? on the internet today, DNS first resolves a name into a number. it's the numbers that get things done; it's the numbers that're important. imagine a network where the names were what was important; where i can define, for example, myco/us/ny/ny/10/107 as the 107th terminal on the 10th floor of the NY, NY office of my company. that's a crappy naming system, but the point is that in a world where the names are the key, i can have much better control over the structure of my network and addresses. it also simplifies routing and such, as the area of responsibility becomes much clearer. it's a direct violation of the e2e model, but so what.
    oh, and before you say "it can't be done, you need numbers" or whatever, it's already been done. several times. read up on Datakit, a network architecture developed by Bell Labs way back when. it had other problems, like static routing, but it got a lot of things right that the Internet's still fumbling with.
    again from the network design point of view, answer this question. which is easier to upgrade: 1 server or 100,000 clients? imagine if every time AT&T wanted to change your billing rate or add the capacity for three-way calling they had to upgrade your telephone. and everyone else's telephone. by concentrating the intelligence (and, therefore, the complexity) the telephone network has become the most successful network in the world, with more endpoints than the Internet, and way more users. concentrating the complexity means less overall work, less chance for something to fail, and lower barriers to innovation.

    that being said, the business model of the Bell system has introduced other barriers to third-party innovation and service, but those are business issues, not technical ones. e2e results in increased overall complexity and additional problems in the name of distributing it away from any central point. sounds like a bad trade to me.
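    For what it's worth, the name-first idea above can be made concrete with a toy longest-prefix lookup over hierarchical names (everything here - the function, the table, the names - is hypothetical, purely to illustrate the "area of responsibility" point):

```python
def longest_prefix_route(name, delegation_table):
    """Walk a name like 'myco/us/ny/ny/10/107' and return the most
    specific delegation entry: the point responsible for that subtree."""
    parts = name.split("/")
    for i in range(len(parts), 0, -1):
        prefix = "/".join(parts[:i])
        if prefix in delegation_table:
            return prefix, delegation_table[prefix]
    return None  # no one has claimed responsibility for this name

table = {
    "myco": "corporate gateway",
    "myco/us/ny": "NY regional switch",
    "myco/us/ny/ny/10": "10th-floor concentrator",
}
print(longest_prefix_route("myco/us/ny/ny/10/107", table))
# -> ('myco/us/ny/ny/10', '10th-floor concentrator')
```

    Each delegation entry owns everything below its prefix, which is the "much clearer area of responsibility" the comment describes.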
  • This is exactly what I would like to see. Say I am on a low-rate DSL hookup, my roommate is downloading a large game demo, and I am trying to do some remote administration of a box. The download is sucking up all the bandwidth. It would be great if I could mark the ssh packets as a higher priority, thus allowing me to do some useful work at a relatively decent speed while the download continues. Linux has a sort of method for doing this (the tc command) but the documentation on how to 'reserve' a certain amount of bandwidth for certain types of packets is pretty bad, and the system doesn't seem to be able to handle dialup speeds either.

    Plus, on a larger scale, it would be nice if routers honored the QoS (or just the ToS) bits for that matter, so interactive apps could win out over batch-type apps.
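    An application can at least ask for this today by setting the IP Type-of-Service byte on its socket; whether any router along the path honors it is another matter, which is exactly the complaint. A sketch using the classic RFC 1349 ToS values (assuming a platform, such as Linux, that exposes IP_TOS):

```python
import socket

IPTOS_LOWDELAY = 0x10    # "minimize delay" -- interactive traffic like ssh
IPTOS_THROUGHPUT = 0x08  # "maximize throughput" -- bulk transfers

# Interactive session: ask the network for low-delay handling.
ssh_like = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ssh_like.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_LOWDELAY)

# Bulk download: flag it as throughput-oriented batch traffic.
bulk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
bulk.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_THROUGHPUT)

print(ssh_like.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 16 on Linux
```

    The bits travel with every packet; the open question in the comment is whether routers do anything with them.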
  • not true... you can very easily have lots of bandwidth but suboptimal service guarantees. that's the whole point of things like diffserv and RSVP...

    let's say my router can handle gigabit traffic, but i'm only using about 200 megabits/s. i don't want my 1 megabit/s of high-priority stuff to have to get plunked out of the router *after* the other 199 megabits that came before it are routed. i want higher priority. that's where it makes the real difference, and can make a serious difference in media-intensive applications.

    jon
  • by StormyMonday (163372) on Wednesday December 06, 2000 @11:17AM (#577283) Homepage
    The biggest advantage of end-to-end networks (also called stupid networks [hyperorg.com]) is their extreme scalability. You can set up a tiny little TCP/IP network at home; it works. We have the TCP/IP Internet; it works. Anything that requires central "intelligence" (read: control) will collapse, sooner or later, as the network expands. Doesn't matter if it is caused by a simple overload, a DoS attack, a terrorist bomb, or a court order. Hit the central server and it goes down.

    The problem is that folks are trying to put services (like voice) that need realtime delivery onto a network that wasn't designed for it. In a straight IP based network, each router only needs resources for the packets that it is currently handling. As soon as a packet gets sent, the router's interest in it ends. A packet can (theoretically!) take any route at all through the network, and it's the endpoint's responsibility to put everything back together.

    Anything else requires additional resources for each connection going through the router. For a backbone router, this is a *lot* of connections. It also means that each connection is "nailed" to a single route through the network. Lose a router and you not only lose the packets that it is storing at the time, but all the connections that it is handling. There are ways of handling this, of course, but the solutions are expensive, in terms of both hardware and bandwidth.

    In my somewhat cynical opinion, what the providers want to do is take the simple "flat rate" model that the Internet is built on and turn it into what Scott Adams calls a "confusopoly", where the customer is never sure what services she is getting or what they're supposed to cost.

    Combine this with the Government's desire (all governments) to monitor and control all communications, and you have the recipe for a real mess.


    --
  • I definitely don't want to see ISPs applying QoS to slow down my (to them) non-essential traffic. However, it CAN be used effectively within a company's internal network.

    QoS can also be a good solution for controlling internet access from a corporate network. On a LAN with 2000 users, even 10% of them listening to streaming RealAudio radio stations can put a pretty serious drain on most internet connections. The traditional way of dealing with this is to block RealAudio, Napster, anything else fun... on the firewall. I vastly prefer the idea of using QoS (when possible) to give priority to business-essential traffic, while allowing any left-over bandwidth to be used for "non-essential" internet usage. It's a win-win. Employees get the "perk" of being able to continue to use their instant messengers, Napster, RealAudio, etc. But the actual business uses that the internet connection is *for* are not negatively impacted.
  • by JoeBuck (7947) on Wednesday December 06, 2000 @11:27AM (#577285) Homepage

    If you take your little network and hook it to another little network, no one cares. Now, let's suppose that you are a big network; in fact, you are the monopoly cable provider in your town. Now, let's suppose you like Fred Foo for mayor, because he helped you arrange your monopoly, and you hate Bart Barr, because he's trying to get a competing cable franchise established. So you decide to give the Foo campaign high QoS while throttling the Barr campaign to 300 bps. Still no reason to regulate?

    Or, to take a more realistic example: you have a cable monopoly and you own a movie studio. You provide high QoS jitter-free streaming interactive movies to your cable modem customers -- but only movies owned by your studio. Competitors can only use your generic, bursty service, with lots of packet retransmissions and brief outages. Customers can use DSL instead of a cable modem, but the local phone company, which controls all DSL traffic, has made a deal with a different movie studio, so if you want to watch someone else's movies you're still hosed. You can try wireless IP, but there's not enough available bandwidth and too much interference.

    Long ago, the feds made a very wise decision: they forced the major studios to sell their theaters. In the old days if you were in a small town you might only be able to get movies produced by the studio that owned your local theater. Content and distribution need to be kept separate, by law if need be.

  • by PureFiction (10256) on Wednesday December 06, 2000 @11:33AM (#577286)
    There seem to be a few misconceptions about the role of QoS and how it relates to direct connections (i hate the fucking E2E buzzcrap. B2B, P2P, E2E, arggg!! ;)

    QoS is actually used in a large portion of the backbone, but not at the IP layer.

    For example, Sprint uses the same network for their digital voice (PCS, long distance). This is a big SONET backbone tied to OCx ATM networks. From there they branch to voice or data.

    For IP data networks, the data flies over the OCx networks just the same as voice, but voice has QoS applied to its virtual connections via ATM AAL2. IP data traffic is usually AAL5, with no QoS.

    Also, many of the backbone IP providers (Sprint, UUnet) use QoS/traffic shaping at the entry point for small ISPs to ensure that traffic from big fish like Sprint or UUnet or AOL gets better response.

    You may remember an article about big data providers (UUnet and Sprint specifically) giving crappy data service to ISPs and affecting their ability to compete or provide reliable services.

    At any rate, the point of this is that currently QoS is used, but internal to the backbone carriers themselves. It is definitely nice to have, and allows them to implement all sorts of latency-intolerant services like voice and video over their networks which cannot be implemented without QoS.

    It will take a lot of effort to get QoS at the IP layer, as this will entail paying ISPs for a QoS connection, probably ATM, and running IP over that connection, or fundamentally altering the IP protocol to include QoS capabilities similar to those provided by ATM. The latter will not happen ;).
  • by seichert (8292) on Wednesday December 06, 2000 @01:51PM (#577287) Homepage
    Deregulation of electrical power supply in California is perhaps leading to higher electrical bills in the long run. There is no real deregulation of electric power in California. I live here, I read the articles, I buy UPSs for all my computers (god this summer was fun wasn't it). In California the power companies still have to sell everything through a government-controlled clearing house. This clearing house can basically regulate the price. In addition it is almost impossible to get permits to build new generating facilities because all of the environmentalists scream about anything to do with power generation (yet want us all to drive electric cars). Basically this deregulation has been poorly managed by the state, and the state has never really let go. Why would the governor want to? By being able to control the electricity he is in a greater position of power.

    Secondly, corporations abuse power. They help their friends and burn their enemies, with the consumer left as the meat in the sandwich. Bureaucracy is bureaucracy, private or public. Absolutely. I in no way disagree with you. The fundamental difference is that by law I do not have to buy a particular company's products. I have to abide by whatever laws the government sets. If a bunch of bureaucratic slimy corporations do not provide what I am looking for, I can buy something from a small business. The most disgusting thing that corporations can do is to basically make themselves a part of the government by lobbying to influence legislation that interferes with the free market. Remember that corporations and their CEOs are not necessarily interested in the free market.
    Stuart Eichert

  • The group noted that NAT can be eliminated simply by putting more addresses into circulation.

    In theory, yes. In practice though, ISPs will still try to get a few more dollars per month out of you just for changing a few entries in their database. I think the logic goes something like:

    • one of the differences between business and residential users is the demand for IPs
    • businesses are willing to pay more money
    • businesses won't pay more money for the "business DSL" just because it has the word "business" in it, so we need to distinguish the types of connections, and do it in a way that costs us the least amount of money
    • let's charge per IP, even though it's a pain in the ass to some residential users

    --
  • There are also some tweaks like fair queueing that divide the bandwidth among active users. So if someone doesn't send any packets for a while, other people will be able to use that bandwidth. Thus if you have N users, each one can be guaranteed at least 1/N of the bandwidth, but they might get more if someone else isn't using any.
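    A toy sketch of that fair-queueing idea (the function name and packet representation here are invented for illustration; real routers do this per-packet in silicon):

```python
from collections import deque

def fair_queue(flows, capacity):
    """Round-robin fair queueing sketch: each pass services every
    backlogged flow once, so N active flows each get at least 1/N of
    the link, and an idle flow's share falls to whoever has traffic."""
    queues = {user: deque(packets) for user, packets in flows.items()}
    sent = []
    while len(sent) < capacity and any(queues.values()):
        for user in queues:
            if queues[user] and len(sent) < capacity:
                sent.append((user, queues[user].popleft()))
    return sent
```

    With flows {"a": [1, 2, 3, 4], "b": [10], "c": []} and room for 4 packets, "b" gets its lone packet out on the first pass and "a" soaks up the unused share: [("a", 1), ("b", 10), ("a", 2), ("a", 3)].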
  • I agree, with one caveat: You imply that there is no advantage to marking all traffic as high priority. This is true, because it is the relative priorities of different data streams that affect transfer rate, not their absolute priorities. But, while there is no advantage to marking all traffic as high priority, there is an advantage to any specific user in marking all THEIR traffic as high, as long as it is competing with traffic from other users.

    You would need to in some way change things so that no user can gain an advantage over the others. I don't know how you would do that...
  • Um, that "vague model" has been in use on the Net for years. Only when ISPs became completely short-sighted did they start to get rid of end-to-end.
  • How do you get this sort of information in an e2e environment?

    Why do you need a map of the Net? One of the basic ideas of the Net is that it works using only local information.
  • The whole thing might work if you just redefine it a bit. Instead of "end-to-end" try "edge-to-edge". The article is right, you want to keep the core of the network simple. BUT, the simplicity doesn't have to extend to the END of the connection... just the edge. Caching is a great example. Caches in the core of the network just don't work; there is just too much damn load. However, a cache at the EDGE (say, the ISP level) makes a lot of sense. Same for VoIP. Have gateways at the edges, which the ends use to establish communication.
  • If IP addresses become essentially free (as in IPv6), then it's pointless to talk about "wasting" them.

    Likewise, if you can do encryption and authentication for free, then turning it off costs more than just leaving it on.

    You have a good point about administration, although if you rely only on perimeter security then you're screwed if the perimeter is ever breached.
  • Generally, Houweling described NAT as an affront to end-to-end, because any application which requires transparency of addresses breaks, making end-to-end encryption impossible.

    Could someone explain this a little more? I use ssh through NAT gateways all the time. E2e encryption sure seems to work fine. I suspect I'm missing the point...?

    Added to which, applications sometimes transmit data in the TCP/IP headers which NAT alters.

    So what?


    ---
  • that's a remarkably short-sighted view for the networked age. people who don't live in cities are all anti-social? rubbish. ever heard of telecommuting? i like being able to go anywhere and hook up a modem (or just make a phone call). and i can certainly see the appeal of living in neither a city nor a suburb. those more sparsely populated areas are the ones it's more expensive, per person, to get service to. in addition to providing for different people living in different places, it paves the way for growth into new areas by enabling the construction of an infrastructure before market pressures would otherwise require it.
    also, keep in mind that the real issue is "hard to service areas" not "people i think are useless". it's far more difficult to run lines into the appalachian mountains than down the block in Manhattan. yet interesting people or organizations can be found in both places. the diversity ubiquitous telephone access provides society is far and away worth the small cost.
  • These are very limited implementations and installations.

    While they do work, they are not ubiquitous on the internet, and that is what I meant by QoS for IP: a complete and total QoS solution without the need for special TCP/IP stacks and applications.
  • Anyone have a cite for any legal decisions which made it illegal for that company (or any other) to provide telephone service?

    Or did the gov't just say they would not prosecute AT&T for anti-trust - in which case it would not be profitable for anyone else to try to compete (which is a different problem than to make any attempt to do so a crime).

  • unc_onnected,

    Thanks for your good post. I basically agree with you. In several crucial markets, particularly US domestic distribution, Standard Oil (SO) did achieve effective total monopoly. A couple of caveats: The Spindletop discovery ended SO's stranglehold on crude production long before the courts broke up SO, and by the turn of the century, the relevant market for petroleum was worldwide, not country by country. And there, SO never got to monopoly (despite vigorous effort). If you're interested, Email me about the history of Shell and BP, or read chapters 5 & 6 of "The Prize" by Yergin.

    Thanks for the good post.

  • i don't want my 1 megabit/s of high-priority stuff to have to get plunked out of the router *after* the other 199 megabits that came before it are routed.

    If you truly have adequate bandwidth (including your router's capacity), it'll all go out at the same time. QOS only comes into play when you have 200Mbps but you're trying to send out 201. That's when the 1 might have to wait for the other 200.

    QOS looks good in that respect, especially since you control the router. However, now instead of making a very simple routing decision, the router has to look at the data in the packets and decide when and how to route it. Depending on how QOS is implemented, the router might end up costing more than an extra 1Mbps of capacity.

    Really, QOS comes down to a method to allow your ISP to get away with more overcommitment. That could be good or bad, depending on your ISP.
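    That "only matters when oversubscribed" point can be made concrete with a strict-priority sketch (names invented for illustration; real routers use hardware queues, and this ignores the per-packet inspection cost mentioned above):

```python
import heapq

def drain(packets, capacity):
    """Strict-priority scheduler sketch: lower number = higher priority.
    Under capacity, priority is cosmetic (everything goes out anyway);
    once oversubscribed, only the lowest-priority traffic waits."""
    # seq keeps arrival order stable among equal-priority packets
    q = [(prio, seq, name) for seq, (prio, name) in enumerate(packets)]
    heapq.heapify(q)
    out = []
    while q and len(out) < capacity:
        _, _, name = heapq.heappop(q)
        out.append(name)
    return out
```

    With room for all three packets, drain([(5, "bulk1"), (1, "voice"), (5, "bulk2")], 3) sends everything; drop the capacity to 2 and it is the bulk traffic, never the voice packet, that gets left behind.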

  • Yeah, I read that. Too bad it has nothing to do with what I was saying, because it was citing a time during which the country wasn't fully hooked up yet. That's roughly analogous to the home computer market in the early 1980's. If AT&T didn't become the monopoly, someone else would have, whether the government helped it happen or not. Microsoft managed it without government help, and their market has even less "network effect" than a phone system does.
  • by moogla (118134) on Wednesday December 06, 2000 @09:55AM (#577306) Homepage Journal
    The redundancy of these systems is one of the reasons why we have some of these problems. I would love to reduce my access to the outside world to a single coax or fiber connection to my ISP or Telco, and have all services derived from it. If the government would like to tap my phone, I'll leave a port open on my PBX emulator (if there is such a thing).

    I look at the situation now and want to throw up; vendors paying for stuff that is already being paid for by someone else, names and numbers being bought in blocks and hoarded, etc. etc., protocols that don't want to work with each other, a government that couldn't even fathom the complexity of the system but wants a chokehold on it. We're so screwed.

    I'm moving to Sealand

  • by NateTG (93930)
    I can't see any use for QoS that makes sense. If you want better bandwidth or latency you can already pay a premium for them, as many /. readers are aware. QoS doesn't provide any extra availability there. QoS will however provide some corporate control method for content, so that certain sections of the web may be QoS-enhanced, and those corporate interests will be able to exert control over the web.
    As an existing web user I am wary of QoS attacks, especially attackers gaining access to QoS (probably the higher priority levels) to mount a nasty new variety of DoS attacks.
    Personally the whole end-to-end notion makes much sense to me, as it is a viable paradigm (oops) for most internet applications and problems.
    I also don't know how Voice over IP through a central server will provide many of the interesting possibilities (i.e. intranet telephones) that a point-to-point approach will.
  • by SquadBoy (167263) on Wednesday December 06, 2000 @10:00AM (#577308) Homepage Journal
    "No user is really secure, regardless of what OS they use, this we all know, so why does anyone even mention NAT, firewalls etc." Because as a home broadband user, something like floppyfw (www.floppyfw.org) is a really good way to get rid of the script kiddies. The answer is yes, functionality is a security risk. The only 100% secure computer is encased in cement and at the bottom of the ocean. Security always has been and always will be a tradeoff. A tight little firewall is a really good way of stopping the script kiddies: you don't really have anything that interesting, but if it is easy they will do it; if not, they will move on to an easier target, and these days there are plenty out there. So this really does make sense.
  • by The Cunctator (15267) on Wednesday December 06, 2000 @10:00AM (#577310) Homepage
    Can't you realize that government regulation got rid of the AT&T monopoly? If it were not for government regulation (requiring open access to the phone network), there wouldn't be the competitive market of ISPs which you like so much. People are foolish to think that if you eliminate all regulatory power that companies won't use their unbridled power to favor their friends and hurt their enemies.

  • by DunbarTheInept (764) on Wednesday December 06, 2000 @10:34AM (#577313) Homepage
    Who created the AT&T monopoly? Why, the natural laissez-faire forces in a market that is inherently a natural monopoly, that's who. Sure, the government *regulated* the monopoly, but don't kid yourself for a second into thinking that the monopoly would have been nonexistent if the government had let things run their own course. Once Ma Bell had the first phone poles up, the first phone network, nobody else could possibly compete, because it would be impossible to find somewhere to start small. Nobody is going to sign up for some new tiny phone company that doesn't hook up to the rest of the phones in the country yet. Government regulation forces telcos to let other telcos connect to them. Take that away and nobody could ever break into the market once one company has the majority of customers, no matter what the quality of their service might be, or the price, or any of those other factors that companies normally use to compete with each other.
  • That is my quick take on it.

    I see that there is a scalability issue versus the political agendas. There are obviously advantages to not being democratic in how you deal with data. Spam, for example, or porn. [depending on taste]

    There are also disadvantages to not being democratic in dealing with data. Insert speculated advertising spin:

    "Microsoft's Bits and Bytes are simply superior. They simply just are"
    the straw man and obvious target aside, this is also a problem.

    So do we want to gain a short-term advantage in dealing with scalability, and wind up with a solution we would not desire?

  • by cperciva (102828) on Wednesday December 06, 2000 @10:39AM (#577316) Homepage
    The real point of QoS isn't to provide bandwidth to high-priority packets at the expense of low-priority packets, or even to reduce latency. The point of QoS is to reduce jitter, because some applications (VoIP) really hate jitter.

    Separating internet traffic into high-jitter and low-jitter classes could easily reduce VoIP jitter by a factor of 10.
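    Why jitter, not raw latency, is the enemy: a VoIP receiver hides jitter behind a fixed playout delay, and packets that arrive after their slot are simply lost. A minimal model (names and time units invented for illustration):

```python
def playout(arrivals, delay):
    """Fixed-delay jitter buffer sketch: a packet with media timestamp
    ts is scheduled to play at ts + delay; anything arriving after its
    deadline is discarded. A bigger delay absorbs more jitter, but
    every extra unit of delay is extra mouth-to-ear latency."""
    played, lost = [], []
    for arrival_time, ts in arrivals:
        deadline = ts + delay
        (played if arrival_time <= deadline else lost).append(ts)
    return played, lost
```

    With arrivals [(0, 0), (25, 20), (70, 40)] and a delay of 20, the jittered third packet misses its deadline and is lost; doubling the delay to 40 rescues it, at the cost of 20 extra units of latency on every packet. A low-jitter network class lets VoIP keep the delay small instead.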
  • by tylerh (137246) on Wednesday December 06, 2000 @10:41AM (#577317)

    Can't you realize that government regulation got rid of the AT&T monopoly?

    This is only half true. During telephony's first half-century (roughly 1875-1920), vigorous competition was the norm. From Britannica [britannica.com]:

    After the Bell Company's patent on the telephone expired in 1894, it encountered growing competition from independent phone companies and telephone manufacturers. ... In a commitment first enunciated in 1913 but affirmed by the Graham-Willis Act of 1921, AT&T, as a "natural monopoly," agreed to provide long-distance service to all independent telephone companies. By 1939 AT&T controlled 83 percent of all U.S. telephones and 98 percent of all long-distance telephone lines and manufactured 90 percent of all U.S. phone equipment.

    That's right, AT&T was facing growing competition, so they had the government declare them a "natural monopoly." In 1984, the government was just trying to undo the mistake it had made 60 years earlier.

  • I constantly talk to customers about their Internet requirements with respect to authentication and encryption. Many of them are in the Australian Federal Government, where a new project, Fedlink, is being deployed. This plans to have IPSec deployed at the boundary routers of each agency's network connection to the Internet.

    I point out that without end-to-end authentication and encryption, it is like having a bikini and only wearing the top. They have to understand that the identification/authentication provided is for the agency router and nothing more (this is still necessary but not sufficient for secure communications). Of course you can run IPSec in authentication mode and then run HTTPS/SSL to support the application authentication and encryption.

  • ...fundamentally altering the IP protocol to include QoS capabilities similar to those provided by ATM. The latter will not happen ;).

    So what are RSVP and DiffServ? It already happened; twice in fact.
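    And for what "QoS at the IP layer" looks like in practice: DiffServ re-purposes the old TOS byte in the IP header, so an application can request a traffic class with an ordinary socket option. Whether any router along the path honors the marking is entirely up to each network's policy:

```python
import socket

# DSCP "Expedited Forwarding" is codepoint 46; the TOS byte carries
# the DSCP in its upper six bits, so the byte value is 46 << 2.
EF_TOS = 46 << 2

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
# Every datagram sent on s now leaves marked EF; routers that do
# DiffServ can map it to a low-latency queue, everyone else ignores it.
assert s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS) == EF_TOS
s.close()
```

    (Tested on Linux; the IP_TOS option is standard on Unix-like stacks but behavior can vary by OS.)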
  • The most bandwidth I can use is limited by ... my input devices (including video camera), and the number of individuals who want to see my content all at one time (emphasis added).

    Remember, more people will want to see what you have than you think. There are hundreds of millions of people on the Internet; unless multicast gets REAL good REAL fast, you're going to have a problem serving even 0.1% of the world's population (6 million simultaneous users).

    Remember, there *is* such a thing as too much porn.

    Or too many users viewing the same pornogram[?] [everything2.com].

  • O.k., putting all of the intelligence into the endpoint is a _bad_ thing. By doing that, in order to offer a service, you have to require that users run a _specific_ platform, and a specific release. All of a sudden, the idea of interoperability goes out the window. We will be back to the situation where the only phones you will be able to use are ones approved by the phone company, complete with them being wired into the wall. :)

    I think there is some confusion about the intent of the protocols discussed. SIP/H.323 are signalling protocols. They leave it up to the endpoint to make decisions. MGCP is a _control_ protocol (says so in the name! :) ). MGCP is great at controlling gateways. It is made to do things like control VoIP gateways, allowing third-party ISUP systems to manage the circuits and perform the H.323/SIP for them.

    Moving intelligence away from the endpoint is _good_ when you are trying to get large port densities. Currently, port densities for VoIP gateways tend to max out at about ~1400 ports (e.g. Cisco 5800). 1400 ports only gets you about four calls/second if they stay on the line for the average 5 minutes. Four calls is nothing, piddly, minuscule. Now imagine having to manage several hundred gateways, each one with its own configuration, just to get the call rate up to an acceptable level. Now you see the reason MGCP exists. Gateways are supposed to be stupid. They route speech. They shouldn't have to think about it; it's hard enough on the DSPs to just convert it to packets. :)

    If you pull the intelligence required for basic services away from the endpoint (perhaps through proxies) you can then put the intelligence in a central location, allowing a lower cost solution (since the endpoints can be _very_ stupid), as well as allowing people to purchase more expensive endpoints for additional features. The basic features are still provided by the central server. Features such as call waiting, call forward on busy (voicemail), billing, number portability, voice prompts, announcements, etc. would all be implemented in a server/proxy. If you wanted fancier routing capabilities, you could buy something that gave you additional control, but you aren't required to buy it to get basic services.

    Of course going in the other direction is also bad. You want the endpoint to be allowed to make decisions about what is going to happen. So, for communication with the final destination, MGCP is probably a poor choice, and that's why SIP/H.323 exist (my preference is H.323...SIP sucks. :P ).

    Saying E2E is good and saying that protocols that exclude E2E (such as MGCP) are bad prevents a whole range of technological solutions from being used. A control protocol is just that, a control protocol. If you don't need (or want) intelligence at an endpoint, why require it?

    Jason Pollock
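    The port-density arithmetic above is just Little's law, and is easy to check (the function name is invented for illustration):

```python
def sustainable_call_rate(ports, mean_hold_seconds):
    """Little's law: concurrent calls = arrival rate * mean hold time,
    so a fully loaded gateway sustains ports / hold_time new calls/sec."""
    return ports / mean_hold_seconds

# ~1400 ports with 5-minute average calls: a bit under five new calls
# per second, which is why one big gateway can't carry carrier-scale load.
rate = sustainable_call_rate(1400, 5 * 60)
```

    So "several hundred gateways" is not hyperbole: reaching even a few hundred calls per second at that hold time takes on the order of a hundred such boxes.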
  • by sulli (195030) on Wednesday December 06, 2000 @10:02AM (#577328) Journal
    The vendors have been pushing QoS for years. Yet nobody uses it. Why?

    Because nobody is willing to pay for it. Customers of ISP service, given the choice between more bandwidth and priority, always buy more bandwidth with the same dollars. Bandwidth is cheaper and cheaper to provide; priority is expensive. These trends are, if anything, accelerating as DWDM and the like make it ever cheaper to cram more gigabits of traffic onto the same fiber.

    Of course bad guys like cable carriers may use QoS to implement CoS (Crappiness of Service) for their less favored customers, but as options increase, those customers will switch away.

    It's like soccer in the US: QoS is the wave of the future - and always will be!

  • by gammoth (172021) on Wednesday December 06, 2000 @10:04AM (#577330)

    Deregulation of electrical power supply in California is perhaps leading to higher electrical bills in the long run.

    Secondly, corporations abuse power. They help their friends and burn their enemies, with the consumer left as the meat in the sandwich. Bureaucracy is bureaucracy, private or public.

  • Bandwidth is cheaper and cheaper to provide; priority is expensive.

    Not to mention that bandwidth is a lot easier to advertise...
