The Fight For End-To-End: Part Two

Stanford University held a workshop last Friday - The Policy Implications of End-to-End - covering some of the policy questions cropping up which threaten the end-to-end paradigm that serves today's Internet so well. It was attended by representatives from the FCC, along with technologists, economists, lawyers and others. Here are my notes from the workshop. I'm going to try to skip describing each individual's background and resume, instead substituting a link to a biography page whenever I can. (Part two of two - part one ran yesterday.)

The final segment of the morning covered caching. The main issue centered around transparent caching, where users ask for certain content but their request is silently fulfilled by a caching proxy server instead, generally without the user having any way to detect this. The standard concept of caching has the user being presented with the same content she would otherwise have gotten from the requested site, but that need not be true - Singapore, China and Australia have all used transparent caches to censor their citizens. This can also be a security violation (are you really talking to the secure server on stupidpettoys.com, or a proxy in between? Most users won't notice the difference.). Ann Brick noted a subsidiary issue - big commercial players have the ability to pay for their sites to be cached, while individuals do not. Similar to the QoS issue, this might be used to discriminate between paying, fast, commercial sites, and sites owned by individuals or even competitors.

David Clark made the insightful observation that dollars spent on caching don't go to general network improvements -- one small piece of the network is improved by caches, but the same money spent improving the whole network could improve it for everyone. Timothy Denton concluded this segment with the characterization of transparent caching as the difference between "form follows function" and "function follows form": the mere presence of caching and the ability to interfere with content delivery in the middle of the network destroys end-to-end and creates opportunities for mischief.


In the afternoon, there were two larger sessions covering broadband and wireless Internet access. In both areas, the companies controlling these access methods have strong motivations to violate end-to-end principles.

Jerry Duvall led the broadband discussion. He presented a rather fascinating economist's view of the situation -- an economist's world being solely concerned with customers, producers and markets. Laws are necessary to enable markets -- contract law, commercial law, fraud law, and so on are needed in order for markets to function. He summoned up the ghost of Adam Smith with a brief review of capitalism: producers always conspire against the public to extract more profit from them, and only competition keeps them in check. Marketing, lock-in, monopolization, and predatory pricing are always used by producers. He denied that end-to-end represented any sort of perfectly competitive market, however, suggesting that customers' own wants complicate the picture -- in some cases, customers actually want bundles from a single provider, and may actually prefer non-end-to-end Internet access. From an economist's point of view, end-to-end is only a means to an end. The end in this case is creating value for the customer. If that involves end-to-end Internet access, fine. If it doesn't, still fine. The value to the customer is paramount; engineering elegance is secondary.

Duvall also suggested that many observers have a naive view of regulation. With regard to the debate over open access to cable systems, he stated that there was no easy way for regulators to "come in and fix it." Regulation implies overcoming the resistance of entrenched players, and in the case of open access to cable systems, AT&T and other cable giants have proven adept at fighting lawsuits in support of their ability to keep their systems closed.

As we've seen previously, there was discussion of the reasons why end-to-end can be violated: sometimes customers want it, but (probably more often) the wants of companies are the driving force. Duvall suggested that the external value of end-to-end in fostering competition and democratic values isn't adequately accounted for in most analyses of the economics of broadband. That is, the cost of violating end-to-end is spread out among many users of the network, while the benefits from that action accrue mainly to individual companies -- in economic parlance, this is called externalizing costs.

Another panelist emphasized the democratic value of open systems, a recurring topic in Lessig's writings. There was a bit more discussion of bundling-as-an-aid-for-novice-users vs. bundling-as-a-way-to-lock-in-customers. Jerome Saltzer reiterated the time-tested solution for monopoly problems: separate the content from the content-carriers. Deborah Lathen, acting perhaps as devil's advocate, asked why the builder of the pipe shouldn't be allowed to monopolize it. Duvall noted that no matter what the FCC might do to regulate cable carriers, economic theory doesn't hold out much hope of relief -- any time there's a monopoly (over the cable pipe), the monopolist is going to be able to extract monopoly rents, one way or another. If regulation constrains certain aspects of the business, the monopolist will find some other way to leverage the monopoly for greater profits. The only sure remedy is eliminating the monopoly.

Further audience discussion raised the idea that the concept of "an ISP" is an odd sort of legacy brought about by the necessity to have an intermediary between the telephone network and the TCP/IP network. In the future, the concept of an ISP may change radically. A question was asked: what benefit does the public get by allowing the cable companies to monopolize access? There were no good answers.

Mark Laubach gave a good overview of the architecture of cable Internet access, referring to the DOCSIS standard, which wasn't designed with open access in mind. Laubach stated that "basic IP dialtone" -- that is, a simple TCP/IP Internet connection without frills or bundled services -- should be a consumer right, which should apply to every broadband service regardless of delivery method: cable, DSL, wireless or satellite services.

Peter Huber summarized the open-access debate as it affected phone companies. The phone companies had a 1 MHz twisted pair of copper strands that they swore up and down couldn't be shared. They were ordered to share it, and now are doing so: local and long-distance competition, shared data/voice over that tiny line, co-location at central offices, etc. Now the cable companies have a 750 MHz copper wire that they claim is "impossible" to share. Huber emphasized that whatever the regulations, cable and phone companies should be treated equally. Currently there are disjointed regulations, which (depending on your viewpoint) either unduly hamper phone companies or leave cable companies unfairly unrestricted.

Further discussion brought out the case of Stockholm, Sweden. Stockholm and certain other cities have taken on the job of laying fiber-optic cable as a municipal service, similar to sewer service or water or roads. Since the municipality built the pipe to the home, there is no issue of a company attempting to monopolize the pipe, and any company which wants to offer Internet service over the pipe may do so. As a result, Stockholm residents are getting extremely fast access speeds at prices less than U.S. residents pay for cable Internet access, and customers don't have to worry about the cable monopoly steadily reducing their upstream speeds, or banning servers, or whatever other crackdown U.S. cable providers have thought of most recently. The panel then debated whether (and how) it would make sense to move the U.S. to that sort of municipal model. A panelist threw out the figure that true open access to cable pipes might require a choice of 400 ISPs. An audience member suggested that as things are currently going in the U.S., there might be a choice of five ISPs at most, hand-picked by the cable provider.

David Clark added that whatever solution is proposed, it must be an ongoing process -- since cable Internet access is certainly not going to be the final stage of bandwidth development. Finally the broadband session closed with a pithy statement that, despite claims to the contrary, content is not king -- there is, and always has been, more money in individuals talking to each other than in one-way content distribution. The question that remains is how to convince broadband providers that there is more money to be made in selling large quantities of low-profit services rather than small quantities of more profitable ones.


The day concluded with a session about wireless Internet access. Unsurprisingly, WAP was the first topic to come up: a closed, end-to-end-unfriendly, expensive protocol that is all but deceased in the market, yet still actively promoted by companies that hope to benefit from controlling wireless Internet access.

Karl Auerbach had an insightful comment about why to use plain vanilla TCP/IP instead of a bespoke wireless protocol. Similar to the argument raised by Bruce Schneier and others that using a proven crypto algorithm makes sense because there are a lot of bad protocol writers in the world, Auerbach posited that freely available TCP/IP stacks have had the bugs beaten out of them, but the average proprietary protocol hasn't. The topic shifted to the location information that is now required to be built into mobile phones. The panel discussed the control issues inherent in different network architectures: location information could be built into the phone, and controlled by the user, or it could be built into the cell towers, and controlled by the phone company (or law enforcement, or advertisers). It looks like the second architecture will be the one that is deployed.

Yochai Benkler brought up the issue of spread spectrum changing the rules for FCC frequency allocation -- more communications may shift to frequencies where the FCC does not require licenses to broadcast. Dewayne Hendricks gave a lengthy and interesting description of how amateur radio is currently being used in a manner similar to the venerable Fidonet to pass packet data over the short-wave frequencies via a store-and-forward system. The interesting part is that Amateur Packet Radio has been around for 15 years or so. Hendricks' concept was that the first truly free network would be one composed of independent wireless spread-spectrum devices creating an ad hoc network which could not be censored or controlled by any entity whatsoever. One audience member quipped that disruptive technologies always appear to incumbents as toys.

Hendricks noted some other wireless WANs, such as one in the San Francisco Bay area using Breezecom wireless cards and antennae. (Coincidentally, Salon did a story on wireless WANs just a few days ago.) Dale Hatfield noted that Hendricks' network could be created today using licensed spectrum, and warned that the greatest danger is incumbent spectrum-holders pushing regulations which protect their investments by making it difficult for the FCC to open up or use sections of the spectrum for these innovative uses.


Towards the end, one member of the audience (and I do apologize for not catching who it was), pulled everything together by noting the convergence between end-to-end as a technological issue, open access as an economic issue, and democracy and public debate as a political issue. The idea of eliminating "gatekeepers" on the internet is important for a great many reasons, whether you look at it as a technological issue of promoting progress and innovation, or as an economic issue of fostering competition and preventing monopolies from abusing their power, or as an issue of promoting free and unrestrained speech on the communications media of the 21st century. This is certainly one of the most important issues facing the country today, but relatively few people know anything -- even a smidgeon -- about it, or at most they've read a few news reports about the AOL/Time Warner merger. I'm glad to see such a diverse and intelligent group working on the issues, and if they don't yet have all the answers, it's only because they want to get it right.


Comments:
  • by Anonymous Coward
    Now, is this something people would consider to be NOT pure end-to-end IP?

    Yes, because you're blocking particular IP packets based on some part of the data portion (in particular, the part of the data that constitutes the TCP header). I'd quote from the paper linked from the previous article, but it doesn't seem to be viewable...

    Personally, I wouldn't want to pay $XX more for the 'premium' service just to be able to receive personal mail separate from debian-user separate from bugtraq etc., when I have a box perfectly capable of receiving and sending the mail itself on any normal connection.
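
    (To make the mechanics concrete -- a minimal Python sketch, not anything your ISP necessarily runs: from the IP layer's point of view the TCP header is just payload, so a "port 25 block" means the filter digging the destination port out of the data portion of every packet.)

        import struct

        def tcp_dest_port(ip_packet: bytes):
            """Return the TCP destination port of a raw IPv4 packet, or None.

            Illustrates the point above: the "port" a filter blocks on is not
            an IP-level field at all -- it lives in the data portion of the
            IP packet, in what the transport layer calls the TCP header.
            """
            version_ihl = ip_packet[0]
            if version_ihl >> 4 != 4:           # only handle IPv4 here
                return None
            ihl = (version_ihl & 0x0F) * 4      # IP header length in bytes
            if ip_packet[9] != 6:               # protocol 6 = TCP
                return None
            # TCP header starts right after the IP header:
            # source port (2 bytes), then destination port (2 bytes)
            _src, dst = struct.unpack("!HH", ip_packet[ihl:ihl + 4])
            return dst

        # A filter that "blocks port 25" simply drops any packet for which
        # tcp_dest_port(pkt) == 25 -- a decision made by peeking past the
        # IP header into the data it carries.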

  • On the other hand, would you trust a company like AOL to build and maintain municipal roads? ("I'm sorry sir, we only support recent-model AOL-certified vehicles. We have a deal this month on...") At least the local government is democratic and not out to profit off you.
  • The beauty of fiber is that you can run distinct services at the same time, without interference (cable can too, but the available bandwidth of fiber is amazing).

    s/amazing/infinite/

    Yes kids, that's right, for all intents and purposes, the bandwidth of one of those little strands of glass is infinite (sources [google.com]). The switching and repeating gear isn't fast enough to support that of course (yet, but Moore's Law continues to fix that).

    This is another argument for municipalities to control the laying of fibre; the glass will last at least as long as the power lines, water and sewerage pipes that they have also got to lay to your home. It's only the tech at the ends that may need to change over the period of several lifetimes.

    To you whacky Yanks who don't trust your governments, I still don't think you have to -- we're talking local governments here, not federal, for starters, and as for tapping the things, just end-to-end (e2e! Hey ma! I used a buzzword!) encrypt your VOIP drug deals. No problem.

    Cheers, Robert.
  • You need to use CIPE, or at least SSH.
  • As mentioned in the article, and in some of yesterday's posts, QoS can be used for Bad Things. However, Voice over IP is one example of a Good Thing that pretty much requires QoS. How is this resolved?

    It seems to me that QoS packets are more expensive to the network than regular packets, so why not charge for them that way? The end user can buy, for instance, a package with 1Mb/s total bandwidth, including 64kb/s realtime bandwidth. If a small business wants several VoIP lines, they pay for more realtime bandwidth accordingly. That gives you enough room for VoIP clients to do their thing, and streaming video clients can use stream buffering and cheaper, jittery packets just like they do now.

    Maybe I'm wrong, but I don't think this would be a difficult thing to implement for IPv6 ISPs, would it?
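
    (For what it's worth, here is a rough sketch of the end-host side of that idea, in Python and assuming a Linux-style socket API -- the DSCP value, address and port below are illustrative, not anything an ISP has standardized: the application marks its realtime traffic, and the provider's edge could then meter the marked packets against the realtime allowance the customer bought.)

        import socket

        # Mark a UDP socket's traffic as "realtime" by setting the DSCP field
        # (Expedited Forwarding, DSCP 46) in the IP header.  An ISP edge router
        # could meter EF-marked packets against the 64 kb/s realtime allowance
        # and treat everything else as ordinary best-effort traffic.
        EF_TOS = 46 << 2   # DSCP 46 shifted into the old TOS byte (0xB8)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)  # Linux/BSD

        # Send VoIP frames as usual; only the marking differs.  192.0.2.10 and
        # port 5004 are placeholders (documentation address, common RTP port).
        sock.sendto(b"\x00" * 160, ("192.0.2.10", 5004))
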
  • End-to-End is network-layer services, not applications. What do you mean by "Push the intelligence to the edges of the network"? Intermediary vendors will argue that that's precisely what they're doing.
  • Why should TCP/IP access be free (as in speech)?

    It certainly shouldn't be limited. There should be no government interference that states, "You can't have simple IP-only connectivity, unless you also sign up for the Moron Channel". OTOH, commercial operators should be perfectly free to offer IP access "with strings attached", maybe access and a free WebTeeVee to anyone who does sign up to the Moron Channel.

    The existence of bundled deals does not mandate the removal of all unbundled deals. We should guard against this occurring, rather than shouting down anything and everything that looks the least commercial.

  • It depends on what you mean by end-to-end and whether the consumer, publisher, or both have authorized the processing of the messages.

    You can have an end-to-end TCP connection between an HTTP user agent and an intermediary. The issue is how that end-to-end connection is established (was the flow intercepted and redirected), and whether the transaction is semantically transparent.
  • The cable is there to deliver packets, and the cable company should be able to charge money for it. The cable companies ought not to be allowed to have any financial relationship to the companies which generate the packets, nor discriminate between them

    I agree that this would pretty much prevent the sort of abuse I've seen. COX could charge, say, ARNet a fee to use their cable lines for broadband, but could not themselves provide internet service.

    I think that COX would scream bloody murder though :)

    Here in the Corporate States of America I doubt that we'd be able to get that passed. I think that a government regulatory commission for this sort of thing is a much worse idea than simply banning cable owners from being ISPs, but I think that the cable owners would prefer that plan. Unfortunately, I also think that due to the system of legalized bribery we have in the USA, the corporations that own the physical network will prevent anything from loosening their chokehold.

  • On the other hand, would you trust a company like AOL to build and maintain municipal roads? ("I'm sorry sir, we only support recent-model AOL-certified vehicles. We have a deal this month on...") At least the local government is democratic and not out to profit off you.

    Agree mostly. The interstate highway system in the USA works just fine, and a government owned internet infrastructure would work pretty well too.

    My biggest concern is the possibility for censorship, and police state problems. Right now the FBI has to ask all of the ISPs and backbone sites to please let them put Carnivore in. In a government owned system they wouldn't need to ask, and we'd probably never hear about it.

    Perhaps a non-profit monopoly would work better than the government?

    No easy answers, unfortunately.

  • I feel a need to comment on basically every part of this article. Here we go.

    Stanford University held a workshop last Friday - The Policy Implications of End-to-End - covering some of the policy questions cropping up which threaten the end-to-end paradigm that serves today's Internet so well.

    A workshop implies that something gets done, and all I see here is a lot of talk. This should have been called a roundtable. This is just a nit, and the rest of my comment will not be quite so picky.

    I'm already confused about how "End-to-End" is being used here. The only people I'm aware of who have a truly end to end solution are AT&T. They are a portion of the backbone, they provide service down to smaller providers, and they are a cable-to-internet provider. THEY are end-to-end. Buying internet from many other companies which go back to the backbone level actually results in your internet service being carried and supported by a third party ISP.

    I also think that an end-to-end "paradigm" (I thought this wasn't a Katz post?) doesn't serve anyone other than the company controlling it. Oh sure, at first it looks like a good idea; If Microsoft built your PC, and wrote the OS, and all the applications, it would all work perfectly, right? Everything would interoperate. Then they could gouge you whatever they wanted for support and upgrades because you and the rest of the world are locked in to their model. Thank you, but no.

    The final segment of the morning covered caching. The main issue centered around transparent caching, where users ask for certain content but their request is silently fulfilled by a caching proxy server instead, generally without the user having any way to detect this.

    This might be a good one for Ask Slashdot; How can you detect when you are a victim of transparent caching? I can see how you might want to go about it in a vague sort of manner, but it would pretty much require some sort of sniffing since you have to go through port 80 in order to trip the ACL. Is there any package which will do this in an automated fashion, or does anyone know what sort of things you'd want to look for in sniffer logs to determine if this is happening to you?
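
    (One crude test, sketched in Python under the assumption that you know of an address that is definitely not running a web server -- 192.0.2.1 below is just a placeholder from the TEST-NET documentation range. If something answers an HTTP request sent to a dead address on port 80, or if headers like Via, X-Cache or Age show up on normal fetches, a box in the middle is handling your port-80 traffic.)

        import socket

        def probe_transparent_proxy(addr="192.0.2.1", host="www.example.com"):
            """Crude check for an interception proxy on outbound port 80.

            Substitute `addr` with an address you know is NOT running a web
            server.  A transparent cache that intercepts port 80 will often
            answer the request anyway; with no interception you should just
            get a refused or timed-out connection.
            """
            try:
                s = socket.create_connection((addr, 80), timeout=5)
            except OSError:
                return "no answer from a dead address -- no obvious interception"
            s.sendall(f"HEAD / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
            reply = s.recv(4096).decode(errors="replace")
            s.close()
            return "something in the middle answered:\n" + reply

        print(probe_transparent_proxy())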

    Ann Brick noted a subsidiary issue - big commercial players have the ability to pay for their sites to be cached, while individuals do not. Similar to the QoS issue, this might be used to discriminate between paying, fast, commercial sites, and sites owned by individuals or even competitors.

    Uh, yeah. It's called capitalism. The bands which sign up with a major record label get promotion. The companies with lots of money put up billboards and pay for spots in the middle of the superbowl. If I make more money than you, I can better afford fancy cheese for my crackers. This is a shock?

    David Clark made the insightful observation that dollars spent on caching don't go to general network improvements -- one small piece of the network is improved by caches, but the same money spent improving the whole network could improve it for everyone.

    Again, this is not communism. The internet was controlled by the government once, and it wasn't really keeping up. Now, for the most part, the private interests maintaining the sucker have been upgrading it as they see a need. Dollars speak, folks. That's just the way it is. Perhaps one day when we've invented nanotech no one will have to work and everyone will have access to super high speed data connections, but not today.

    Timothy Denton concluded this segment with the characterization of transparent caching as the difference between "form follows function" and "function follows form": the mere presence of caching and the ability to interfere with content delivery in the middle of the network destroys end-to-end and creates opportunities for mischief.

    This has always been a danger on the internet. If you are in between two points, you can screw with people. That's just a fact of life. Man-in-the-middle attacks, faked DNS, routing alterations, blocking traffic, sniffing traffic; it's all been done. The only way to avoid such things is to use public key cryptography on all data, and your key exchange has to be private for maximum security. This means not using the internet to exchange keys. Oh well, so much for that one.
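
    (The one practical mitigation that does exist for the web case, sketched in Python: if the content travels over TLS/SSL with certificate and hostname verification turned on, a box in the middle cannot silently impersonate the server unless it can present a certificate your client already trusts. Plain HTTP -- including everything a transparent cache touches -- offers no such check. The hostname below is a placeholder.)

        import socket, ssl

        def peer_certificate_subject(host="www.example.com", port=443):
            """Connect with certificate and hostname verification enabled.

            A man-in-the-middle that cannot present a certificate for `host`
            signed by a CA we trust causes a verification error here instead
            of silently handing us someone else's content.
            """
            ctx = ssl.create_default_context()   # verifies chain + hostname
            with socket.create_connection((host, port), timeout=5) as raw:
                with ctx.wrap_socket(raw, server_hostname=host) as tls:
                    return tls.getpeercert().get("subject")

        print(peer_certificate_subject())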

    Jerry Duvall led the broadband discussion. He presented a rather fascinating economist's view of the situation -- an economist's world being solely concerned with customers, producers and markets. Laws are necessary to enable markets -- contract law, commercial law, fraud law, and so on are needed in order for markets to function.

    Finally, someone with their head on straight. This guy sounds brilliant. I'd pay good money to see a transcript (or even have an mp3 or similar) of his part of the speech. Then again, maybe it's already on the 'net someplace and I just don't know where it is. The footnote version of his speech in the article I'm commenting on is quite succinct. This is the way the whole world works.

    Another panelist emphasized the democratic value of open systems, a recurring topic in Lessig's writings. There was a bit more discussion of bundling-as-an-aid-for-novice-users vs. bundling-as-a-way-to-lock-in-customers.

    As AOL could tell you, the beauty of bundling is that it accomplishes both at once. This may not be beautiful from a consumer standpoint, or only halfway attractive. But the facts are that companies want to keep their userbase, and that bundling works.

    Deborah Lathen, acting perhaps as devil's advocate, asked why the builder of the pipe shouldn't be allowed to monopolize it.

    Ayn Rand would be proud.

    Duvall noted that no matter what the FCC might do to regulate cable carriers, economic theory doesn't hold out much hope of relief -- any time there's a monopoly (over the cable pipe), the monopolist is going to be able to extract monopoly rents, one way or another. If regulation constrains certain aspects of the business, the monopolist will find some other way to leverage the monopoly for greater profits. The only sure remedy is eliminating the monopoly.

    Absolutely true. Free and Open Markets cannot exist inside of a monopoly. If we care more about openness than capitalism (do we?) then certainly we cannot allow an end-to-end monopoly.

    Further audience discussion raised the idea that the concept of "an ISP" is an odd sort of legacy brought about by the necessity to have an intermediary between the telephone network and the TCP/IP network. In the future, the concept of an ISP may change radically. A question was asked: what benefit does the public get by allowing the cable companies to monopolize access? There were no good answers.

    I've got a good answer: nothing. Remember when we were talking about monopolies? In theory, because everything is from the same provider, it should all interoperate perfectly. In reality, of course, this is never the case. The different departments will treat each other like different companies, and spite and malice will rule the day.

    Mark Laubach gave a good overview of the architecture of cable Internet access, referring to the DOCSIS standard, which wasn't designed with open access in mind. Laubach stated that "basic IP dialtone" -- that is, a simple TCP/IP Internet connection without frills or bundled services -- should be a consumer right, which should apply to every broadband service regardless of delivery method: cable, DSL, wireless or satellite services.

    This is just FULL of stuff I would like to poke holes in. First of all, I don't know what you mean when you say DOCSIS wasn't designed with open access in mind. What does "open access" mean at the moment? DOCSIS is just another physical spec for how a certain type of cable modem (and cable modem head end) should act. I used to work as a Lab Admin for Cisco in Santa Cruz and we did QA and dev for DOCSIS cable modem products. We were not the only ones with DOCSIS head ends; 3com and Lucent (at least) also had solutions. They also have modems; so does Cisco, although production of their CM is fairly unimportant to their business strategy, since they're partnered with something like seven other companies which will be making CMs or STBs (Set-Top Boxes) with embedded CMs.

    BTW, DOCSIS is really spiffy. You get ~40Mb/sec downstream on your very own frequency not shared with anyone else. You do, however, share ~10Mb/sec upstream with everyone else on your segment, which could be as many as 25,000 people, though that's highly unlikely in the field.

    As for "basic IP dialtone" being a basic consumer right, this is pure poppycock. You get what you pay for. IMO a "basic IP dialtone" would include a static IP, and look at what a pain it is to get static these days, what with the artificial shortage in address space due to legacy Class A addresses and poor partitioning. Of course, we will be experiencing a real address space shortage soon enough, so I'm not too worried about taking away those legacy IPv4 addresses from people who got them back in the day.

    The big problem with this summary is the lack of definitions. What is "basic IP dialtone"? (Note that every time I say this, I sneer. Just wanted to set a mood.) It almost seems like it would imply a basic connection type. Dialtone doesn't require any configuration on the line, so does this include DHCP? Or even BOOTP, I guess, but DHCP is superior, so let's just say that. So a static IP, DHCP, and an ethernet jack in your wall, perhaps? Or do you want to standardise on cable? DOCSIS would be dandy there, since you'll be able to go to Fry's and pick up a Sony CM (based on a Cisco Reference Design.) Uh-oh, but DOCSIS is apparently not designed with open access in mind. I don't know what's so closed about a CM which configures itself via DHCP regardless of what ISP you plug it into, but I guess that's just not open enough for some people.

    Now the cable companies have a 750 MHz copper wire that they claim is "impossible" to share.

    Impossible? No. Really goddamned hard? Definitely. Let's take a look at what's involved in just setting up a DOCSIS cable modem environment without any Cable TV services existing concurrently. First of all, you have the head end. Connected to this you have some number of attenuators, and then you run into another box called an up-converter which converts your particular frequency information from your device into the proper "channel" frequency for cable "broadcast". Just to get all of this tuned and troubleshot requires a multi-thousand-dollar cable analyzer. We used one from HP with a very sturdy plastic case. I believe it ran about US$90K. It was hardly the most expensive model.

    A number of these up-converters get attenuated (again) and go into a condenser (a backwards splitter), and from there to the repeaters, which then go to your house.

    It's worth considering that under DOCSIS a lot more bandwidth will be used (what with giving every consumer their own little chunk of the spectrum to download on) so it will be far more difficult to give out bandwidth. Under legacy CM systems it would be far simpler.

    Further discussion brought out the case of Stockholm, Sweden. Stockholm and certain other cities have taken on the job of laying fiber-optic cable as a municipal service, similar to sewer service or water or roads. Since the municipality built the pipe to the home, there is no issue of a company attempting to monopolize the pipe, and any company which wants to offer Internet service over the pipe may do so.

    Cable is a bike path. Fiber is an interstate. Copper, of course, is a deer trail. Some wonderful things have been done with copper, but by and large, we're pushing the limits. It would be wonderful if the US would run fiber to every door, or at least to the curb, but it doesn't seem all that likely to me. There's a lot of US in which to run fiber. Besides, the companies will do it eventually, and then the government will belatedly try to regulate it, and more or less succeed, because fiber has that holy grail of "virtually unlimited" bandwidth; you can put many different signals into the fiber on various frequencies and get them all out again at the other end, and since they're in the wavelengths for light and not radio (near 10^14 Hz rather than between 3 kHz and 300 GHz*) you can put a lot more data in each frequency band.
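
    (Rough numbers behind that claim -- a back-of-the-envelope sketch, assuming the common 1550 nm fiber window, not a statement about any particular cable plant:)

        # Back-of-the-envelope comparison of optical vs. radio spectrum.
        c = 3.0e8                       # speed of light, m/s
        carrier_1550nm = c / 1550e-9    # ~1.9e14 Hz, a typical fiber wavelength
        radio_ceiling = 300e9           # top of the chart cited below: 300 GHz

        # Even a 1% slice of spectrum around the optical carrier dwarfs the
        # entire radio spectrum that coax and wireless have to share.
        print(f"optical carrier:   {carrier_1550nm:.2e} Hz")
        print(f"1% of the carrier: {0.01 * carrier_1550nm:.2e} Hz")
        print(f"all of radio:      {radio_ceiling:.2e} Hz")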

    The panel then debated whether (and how) it would make sense to move the U.S. to that sort of municipal model. A panelist threw out the figure that true open access to cable pipes might require a choice of 400 ISPs. An audience member suggested that as things are currently going in the U.S., there might be a choice of five ISPs at most, hand-picked by the cable provider.

    Knowing the U.S., we'd probably end up giving a certain band to each provider, so we couldn't support more than a few with regulation anyway.

    Finally the broadband session closed with a pithy statement that, despite claims to the contrary, content is not king -- there is, and always has been, more money in individuals talking to each other than in one-way content distribution.

    Thank god they didn't actually come out and say that it would always be that way. It would be interesting to compare the value of selling Cable Television to the value of selling someone internet access, when examined as a Profit-Loss chart.

    I don't have much to say about the wireless stuff, but:

    Yochai Benkler brought up the issue of spread spectrum changing the rules for FCC frequency allocation -- more communications may shift to frequencies where the FCC does not require licenses to broadcast.

    In theory, this is a good thing. In practice, spread spectrum devices are going to trample each other, potentially on purpose, especially when you think about some of those ass-end Taiwanese firms which make their money by reverse engineering ASICs and such and then selling them as their own product, or the Korean fabs which will personally sell someone else's product; these are companies already shown to be short on morality.

    One audience member quipped that disruptive technologies always appear to incumbents as toys.

    Probably the most insightful thing said all day, and they're only referred to as "one audience member".

    Finally, the closing paragraph of the article (in which the author apologizes for not getting the name of another audience member, thank you very much) talks about how it's important to eliminate "gatekeepers" on the internet, but we lost that right when we privatized the internet. We chose between Big Brother controlling the internet (though what with carnivore and its probable relatives, that didn't quite end, did it?) and the soulless corporations (an entity designed to spread blame) controlling our most pure access to information which people may not want us to see. I don't have an answer for this one, sorry. And here you thought I knew everything!


    * Source: United States Frequency Allocations (The Radio Spectrum) chart, U.S. Department of Commerce, National Telecommunications and Information Administration, Office of Spectrum Management, March 1996. For sale by the U.S. Government Printing Office. Superintendent of Documents, Mail Stop: SSOP, Washington, DC, 20402-9328. Or you can order it for $7 plus shipping from This Link [gpo.gov]. Slashdot will probably mangle the hell out of the URL; the Stock Number is 003-000-00652-2 and the U.S. Government Online Bookstore lives at https://orders.access.gpo.gov/su_docs/sale/index.html [gpo.gov].

  • Monopolies are generally bad things. We should think long and hard before we create new ones, particularly ones with explicit legislative sanction.

    Yes, monopolies are bad. But what can we do? It seems like somebody has to have a monopoly on broadband. The only alternative would be to give right of way to every "Tom, Dick, or Harry" who wanted to bury cable to your house.

    Currently the only people in the US who own right of way are the municipalities, the electric company, the telephone company, the cable company, and the railroads. Other than the municipalities, all of them are government sanctioned monopolies.

    Given those choices, I would vote for the municipalities to control broadband. I think the analogy to the interstate system is a good one. The Feds subsidize, but the state and cities pay for it and oversee it. Few places have perfect roads, but almost all places are drivable.

    BTW, there is motivation for cities to keep their internet infrastructure up to date. Think about how much competition there is to get large businesses to move to a particular city.

  • > Can you imagine the level of control this would give the government over the development of IPV7?

    Yes, but their work will all be undermined by Eiri Masami, when he programs the Schumann Resonance into the Version 7 Protocol, and becomes the God of the Wired.

    Props to Lain!
  • There's a lot of commercial interest right now in making intermediaries "smarter" so that they can process messages through them.

    That's the exact opposite of what "e2e" is. "End to End" means that the intermediaries should be as dumb as possible. Push the intelligence to the edges of the network.

    I'm already confused about how "End-to-End" is being used here. The only people I'm aware of who have a truly end to end solution are AT&T. They are a portion of the backbone, they provide service down to smaller providers, and they are a cable-to-internet provider. THEY are end-to-end.

    Yes you are confused. I must admit that when I first read part 1 of this topic that I interpreted "end to end" the same way. But that's not what it means. Details can be found here. [stanford.edu]

  • But on the third hand, the very fact that the government-run services are not out to make a profit can severely hamper their ability to react to changing market demands. Look at the kinds of insanely expensive and substandard phone services that government-run phone systems in other countries often provide.

    And short of privatizing at some later date there's no possibility of competition if the government screws everything up. Yes, you can vote out the politicians who are nominally in charge, but politicians don't administer networks or lay fiber. The entrenched bureaucracies who would end up in actual control tend to operate with flagrant disregard for customer service, and they're incredibly resistant to change.

    Monopolies are generally bad things. We should think long and hard before we create new ones, particularly ones with explicit legislative sanction.

  • Unfortunately, I bet that the courts will eventually take a more pragmatic view since filtering is useful for security

    Precisely -- and by attempting to do this, they may well have created liability for themselves when they fail. The possibility that someone will sue their ISP when their box gets "h@x0r3d" (because the ISP filtered in the name of "security," and didn't do a good enough job) is our best hope for avoiding this kind of limitation on the use of the connections we've paid for.

  • We are also considering HTTP caching on port 80

    If you use transparent caching or proxying and don't tell the user, you're committing fraud, IMHO.

    However, if you disclose it, that's fine. Of course, if there's another ISP that doesn't force me through a transparent proxy . . . unless you're charging me less . . . or your performance is a lot better . . . you get the idea. I think as long as you're honest with the users, that stuff's OK. It's when ISPs try things like that (e.g. upstream capping, transparent proxies, blocked ports) and don't tell the users up front that it's bogus.

  • "a simple TCP/IP Internet connection without frills or bundled services -- should be a consumer right"
    India is currently talking about making a certain amount of bandwidth (256k I believe...) a constitutional right for every citizen. I think this illustrates one of the basic problems of making network access a "right".
    Only governments can declare and enforce "rights". In order to declare and enforce rights, governments must make laws. If the laws are too technologically explicit, they are prone to becoming obsolete almost instantly. If the laws are too flexible, then companies can effectively weasel their way around them or even use them to their advantage. Just witness corporate use of the early (and vague) anti-trust laws against labor unions.
    Trying to legislate any technology as a "right" is trying to hit a moving target with a clunky and inaccurate weapon. We need to think of a better solution.
  • It was local governments that brought us the Bell Telephone monopoly. We are just getting free of that mess. (Of course, now we're in a new mess based on the FCC's idea of what competition means (forced competition), but I prefer it.) Local lines deregulation happened much later than other phone deregulation because local governments would not change.

    The idea that a local government monopoly ISP would be more responsive than a commercial one in a competitive situation, is causing both my eyebrows and my sides to hurt. Should I laugh or cry?

    By the way, government control of the streets themselves is also a hindrance here. If someone *owned* the street, and was free to set rates and charge phone and cable companies for installation and rent, then the right equilibrium between redundant cable and cable sharing would develop. As it is, the current rules interfere with the needed incentives.

    The idea that redundant cable is "wasteful," is like the old USSR's gloating about the "waste" of three or four redundant US car companies. Anyway, where I live, we don't seem to be overwhelmed with cable and phone alternatives yet.
  • The choice between government and AOL is a false one. The point is to have a choice between competing companies. Even in roads. But networks aren't like roads, they can be built in parallel. And the idea that "the government isn't out to profit off you" is pretty naive.
  • I don't think anyone is saying you have a right to free access; it's more along the lines of a right to purchase access without being discriminated against because you are using it in some way that the provider doesn't like. Kind of like the way the electric company couldn't cut off my electricity if they found out I was watching porno movies at home and took offense to it. They are not allowed to discriminate against me on that basis. By the same token, Time Warner or AT&T should not be allowed to deny anyone the right to purchase bandwidth from them (preferably at a price that is deemed fair by the government since there's no real competition to keep the prices reasonable). While they may not be allowed to deny you the right to purchase bandwidth, they can charge additional fees (again, as deemed reasonable by the government) to run lines to outlying areas.

  • The only right you have is to do as you please 'til you harm another. There's no such thing as a right to food, water, electricity, plumbing, healthcare, a job or Internet access.

    It's one thing to have the customer pay for certain costs associated with their personal circumstances (e.g. living in the country), but it's quite another to be able to deny people service altogether, for whatever reason (e.g. they don't want to use the ISP that you approve of).

    OTOH you do have a right to do whatever you want with the bandwidth you buy. This nonsense of disallowing servers must be stopped.

    How do you come to this conclusion? Assuming that your previous statements mean that you believe that broadband providers should not have to offer access to everyone equally, why do you believe they should have to offer access on terms most agreeable to you, rather than the terms they dictate in their EULAs?

  • There should be no government interference that states, "You can't have simple IP-only connectivity, unless you also sign up for the Moron Channel".

    Actually, I think that we're looking for "government interference" to prevent the broadband providers from dictating the very same thing.

  • They're in the business of selling bandwidth. Once I've bought that bandwidth, it's mine to do with as I please.

    I think you are more in agreement than you realize. They are certainly able to decide the terms under which they give you access (or sell, lease, whatever) to a certain amount of bandwidth, since they own that bandwidth. That is one of the problems with the current state of things. Some of us ARE arguing that they should just be in the business of selling bandwidth, with no strings attached. The same kind of regulation would also state that they cannot deny service to someone. This is to prevent them from using their control over the bandwidth to strike exclusive deals with ISPs and others to the detriment of consumer choice. They can charge a fair price for providing that service (which could include a fee for running cable out in the country) and it would work much like the phone system. The key is that they need to be watched very closely, since, as just about any economist will tell you, if they aren't watched, they will tend to screw the consumer every chance they get. That's mainly thanks to the fact that cable companies usually have a monopoly in any given service area. It's rare for there to be any competition, and even where competition exists, there certainly isn't much of it. Without competition to act as a market regulator, we need government regulation.

  • One of the things that I've long advocated is exactly the Stockholm model: you have the Municipal Dept of FatPipes. I find it hilariously wasteful for us to be stringing multiple cable types all around our cities when a single fiber can handle everything we want, and have headroom for anything we think up.

    The beauty of fiber is that you can run distinct services at the same time, without interference (cable can too, but the available bandwidth of fiber is amazing). Broadcast TV can be done over one spectrum range, traditional voice over another, and an IP datastream over another - all that is needed is a "splitter" at each house.

    Now, what I want is strictly this: your local gov't (since it will be most responsive and responsible to local desires) has a complete monopoly over data lines in your city. However, all it does is pass data - it should be legally forbidden to choose who can pass data - the only constant should be a fixed rate schedule for a given service class that will cover maintenance and improvement costs. Thus, ANYBODY should be able to pass voice data over the (truly) public network, if everyone pays $0.01/minute (for instance). You get universal access, and a lot of the nastiness of the content/bandwidth control goes away.

    Two more notes about the massive advantage of doing things this way: it makes for EXTREMELY efficient IP allocation (if the municipality gives you an IP, which it probably should), and it is EXTREMELY easy to ensure a fair and equitable policy from the bandwidth provider, since your Municipal Dept of Bandwidth is subject to all those wonderful F.O.I. and similar laws to which companies are immune. And it's a lot easier to change policies of the Dept of Bandwidth - simply go vote in your next election. Wanna try to get something changed by a company? Good luck.

    Oh, and to the poster above who was under the illusion that the FBI has to ASK for permission of the ISP to put in something like Carnivore - NOPE. They have to get a valid court order to do the tap, but once they have one, they can REQUIRE the ISP to install the wiretapping device, and of course the ISP is legally bound not to tell you that your connection is tapped.

    I know, I've done it.

    -Erik

  • I understand where you're coming from, although I know people who run ISPs who haven't had to resort to this sort of thing, but then I don't know your particular situation.

    I have a question about this port-blocking business: what, exactly, does it protect your company from? If a user is using you as an ISP to connect to somebody else's insecure mail server, isn't the problem with that mail server? If the problem is volume of outgoing data that a user is sending, I would think that's something you should be able to control separately.

    I have much less of a problem with something like the MAPS RBL than I do with blanket blocking of ports. MAPS punishes the actual behavior of individuals and companies, whereas you're punishing everybody in advance, even if they don't know it: guilty until proven innocent.

  • I fully understand that, and might well be hunting around like you (if it weren't for the fact that I get the service for free being a consultant there).

    There are tradeoffs. If port 25 is left open, and people can sign up for services and send SPAM, then the quality of our service goes down because we have to spend staff time dealing with the aftermath, and our equipment has to deal with more DoS attacks. Before blocking port 25 we were seeing about 1 spammer a month signing up. After blocking port 25, we couldn't tell if there were any trying to or not (didn't log port 25 attempts).

    There were alternatives. For example we could have done a background investigation on people who sign up for service. But that would not be cost effective given the time involved (normally accounts go up within 15 minutes) and the fact that there really isn't any information about who spammers really are. We chose to block port 25 because it was the least costly choice, and would have the least impact on the vast majority of our customers.

    As for letting people know, it was our policy to answer truthfully if anyone ever asked; no one ever did. The anti-relaying on our servers has caused a few problems for "road warrior" customers, but blocking port 25 on our dynamic dialup never has raised a tech support issue.

    As for the web cache, I do think it would be appropriate to notify customers, and I will bring that up. As for opting out of caching, I think that's going to end up having to be an extra cost item, since doing so means greater use of bandwidth. A third of the cost of business is buying bandwidth.

    Incidentally, I plan to set up a 2nd optional cache server (not transparent, must configure as a proxy) which also blocks most banner ads. It will pull from the main cache (less the banner ads). Customers will have the option to use it if they choose to. Is that fair? I'm sure it's not fair to the advertisers :-)

  • We do make a pure IP available. The cost is increased because our cost to offer it is greater. Those costs include making sure we identify who the subscriber is to be sure we are not signing up someone whom we have previously canceled due to abuse. There is also a real cost involved in static IP, which everyone who does want to run an SMTP server usually wants. This is why it is bundled together (almost everyone who is in one group is in the other). Most of the subscribers who do run their own SMTP servers are businesses, but there are some individuals. And most of them have migrated to DSL as well.

    I entirely agree with you on the principles. But from the perspective of a business offering where we have to balance between controlling costs (including the impacts that reduce the quality of our service) and offering what the vast majority of customers want, this is what we end up with. Pure IP is not what most of our customers care for.

    Basically, both you and I will have to face the fact that what we want as a pure End-to-End IP is a small subset of the telecommunications market demand. Fortunately, the cost of offering it is only marginally higher, so it is practical to offer it as an alternative.

    My understanding of transparent redirected proxy was that it worked with the IP address you used. If it uses the separate Host: header to do an IP address lookup, it's a broken proxy server design. Maybe the redirection is broken for not supplying the IP address. I haven't looked into what they actually did design, but I know it could easily be designed to work right even in your situation. My point is the blame goes to the design they are using, not the fact that they are using transparent proxy at all. Now I need to go check Squid and see if it's designed right ... *sigh*

    In any event, I would sure like "IP purists" (which I count myself as one) to help me out and find solutions to real business problems like these which preserve pure IP. Please tackle the SPAM issue and HTTP bandwidth redundancy issues as the top priority.
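
    (A sketch of what "designed right" might look like for a Linux-based interception box, assuming netfilter is what redirects the connection to the proxy -- this is illustrative, not what any particular vendor ships: the proxy recovers the destination IP the client actually dialed from the socket itself, rather than trusting, or requiring, the Host: header.)

        import socket, struct

        SO_ORIGINAL_DST = 80   # from <linux/netfilter_ipv4.h>; Linux-specific

        def original_destination(client_sock):
            """Recover the IP:port the client was actually trying to reach.

            When netfilter redirects a connection to a transparent proxy, the
            accepted socket still carries the original destination.  A proxy
            designed this way fetches (and caches) against that address and
            simply passes the Host: header through, so virtual-hosted sites
            keep working.
            """
            raw = client_sock.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
            port, packed_ip = struct.unpack("!2xH4s8x", raw)
            return socket.inet_ntoa(packed_ip), port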

  • We currently offer what I would call a limited access service. All dialup access is restricted on port 25. Outgoing connections to port 25 from dialups are limited to reaching our servers only. This only applies to basic dialup access. We also have a premium dialup service which includes static IP and is not blocked at port 25. The purpose of all this is to pro-actively prevent SPAM from originating from our network, especially considering that we don't really know who these people are who sign up for basic accounts. We've had people call and sign up and then send SPAM the very next day. In the 2 years that we've been blocking port 25, we've not been the origination point for SPAM (relaying has been blocked way longer).

    Now, is this something people would consider to be NOT pure end-to-end IP? Considering that the email content is end-to-end, can this be considered to be a valid exception?

    BTW, we also block ports 137-139 and 31337 at the border.

    We are also considering HTTP caching on port 80 since our inbound traffic is well more than double our outbound, and HTTP inbound can account for virtually all of it. I just don't know how much of it is redundant that a cache would service until I put a cache server in place and try it out and see. But a cache server would definitely cost us way less than the increase we would need to pay over the next year just to add more bandwidth.
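
    (A throwaway probe of that policy from the customer's side, in Python -- the hostname is a placeholder for a machine you control outside the ISP that is actually listening on these ports, so a failure really does indicate filtering rather than a dead remote host:)

        import socket

        TARGET = "test-host.example.org"         # placeholder: a host you control
        PORTS = [25, 80, 137, 138, 139, 31337]   # ports mentioned above, plus 80

        for port in PORTS:
            try:
                socket.create_connection((TARGET, port), timeout=5).close()
                print(f"port {port:5d}: reachable")
            except OSError as exc:
                print(f"port {port:5d}: blocked or unreachable ({exc})")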

  • Wow, a basic consumer right? That is an absurd statement, and it is typical of the techno-elite that frequent these conferences. You don't see 'unbundled' newspapers being a basic consumer right, or magazines, or even TV shows...

    But you do see telephone dial tone, electricity, natural gas, running water, and sewers as basic consumer rights, don't you? Why should there be a right to water? Is there a right to Pepsi or Kool-aid? And what about electricity - does everyone 'deserve' to have a generator in their house? My point is that the proposer is essentially suggesting that TCP/IP access become a utility, and that access to TCP/IP should be free (as in free speech).
  • There isn't a right to those things. Ask someone who lives in the country about it sometime. You generally have to pay for a well to be dug, or pipes to be run, or wires to be strung &c. There are some exceptions -- I believe the phone company must provide local service, while you pay for only the line to your home from some regional office -- but my point stands.

    And this is only right. The only right you have is to do as you please 'til you harm another. There's no such thing as a right to food, water, electricity, plumbing, healthcare, a job or Internet access.

    OTOH you do have a right to do whatever you want with the bandwidth you buy. This nonsense of disallowing servers must be stopped. @Home delenda est.

  • I'm not against such a regulation. I'm for it. But what I am against is the sloppy use of the word 'right.' If one has a right to unfettered cable access, then no-one can charge you for it; indeed, all the rest of us would have to ante up to send it into Appalachia &c. The point I've tried to make is that rights are a particular philosophical concept. One has a right to life, liberty and property--there is no right to access.

    I agree that in the absence of true competition (not oligopoly, which the megacorps like to claim is competition), the State must step in and remedy things. With true competition it would not really matter if each line provider were tied to an net provider--we could pick-and-choose. But we do not have true competition. I'm not certain that such a thing is really possible--I have a nasty feeling that utilities are a natural monopoly and that nothing can really be done to change that. Thus they fall under the purview of the government, unfortunately.

  • They're in the business of selling bandwidth. Once I've bought that bandwidth, it's mine to do with as I please. That's why I disagree with these ridiculous AUPs from ISPs. If they didn't want me sending and receiving packets, they should not have sold that privilege to me. If I sell you my car you don't cry that you have no transportation.

    My objection to calling connectivity a right is that rights are things which others and the state are duty-bound to guarantee. A right cannot be something which costs anybody else--else it is theft. It hurts no-one if I speak freely, worship the God I choose, carry a gun, smoke a cigar, smoke a joint, enjoy the free use of my property, dress how I wish, use Linux or do any other such thing. It hurts us all plenty if net access is a right--we'll have to pay the bill. It's less a right than those other two non-existent rights, food and clothing.

  • maintenance bullying ... evocative. Yes, gatekeepers do have "too much" control over us all. And even many elected ones don't accept that there's a fundamental need for accountability.

    Now the thing I find amusing is that some folk see this as a Good Thing. Say, the FBI and other organizations which think they should, for some reason, have leverage to control what you do, even if it's just by communicating. (Right wing hate groups, and their left wing censors...) The pen is mightier than the sword. (Or "than the bosom", as someone put it in Police Academy N -- just a bit less sexist? :-)

    That is, thinking that we should be free of this particular set of chains is a very political statement. It couples with media control, freedom of speech, and the increasing irrelevance of physical borders for many of the things that "really matter" in at least the wired parts of society.

    Mark my words: one of the big trends over the next few years is going to be the evolution of technologies that support end-to-end quite nicely, but control those ends to a startlingly invasive degree. Gatekeepers control the passcodes, after all, and when you don't have choices, they have a lot of control over what you can do.

    You ain't seen nothing yet; the holders of power are quite familiar with how to maintain it, by seemingly fair means (that are actually foul in some subtle way).

  • I find it odd that so many posters are willing to hand control of broadband over to the government on the grounds that it has only our best interests in mind and will protect us from the evil corporations. This is the same government that has come up with Carnivore, the DMCA, and the Clipper Chip, and you want all net traffic to be required to go through it? The IRS and FBI would enthusiastically support this plan, which is why we should not.
  • Yow. I have no hope of responding to the entirety of this post, so I'll just address the big error at the front. End-to-End does not refer to the same company providing all of the services along the way. If you read the first article, it has a link to a paper on the definition of End-to-End. Basically it's the principle that if you need certain guarantees about your data stream that are application specific, those guarantees should not be built into the lower levels of the system.

    To use an example from the paper, when a file transfer completes, you want to check that the file got through with all the bits intact. However, in order to do that you need to get a checksum of the file and make sure the file has not been changed. If the application on the other end screwed up and sent an incorrect portion of the file, you need to detect that, and it must be done on the application level. Even if TCP/IP has perfect error checking and recovery, the application must perform an application specific check, and thus there is no point in making TCP/IP guarantee perfect transport.

    The basic principle is "push the intelligence that the endpoints need out to the edges of the network, and don't bother making the intermediate network smart." Or even more concisely, "Make everything as simple as possible, but no simpler" (Einstein?).
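
    To make the file-transfer example concrete, here's a minimal sketch in Python of the application-level check described above. The helper names and chunk size are made up for illustration; it's a sketch of the argument in the paper, not anyone's actual implementation:

    import hashlib

    def file_digest(path):
        """Application-level checksum of a file already written to disk."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_transfer(received_path, expected_digest):
        """End-to-end check: re-read the received file and compare digests.

        Even if the transport below is perfectly reliable, this check is
        still needed -- the sending application may have read the wrong
        bytes, or the receiving disk may have mangled them -- which is
        why there's little point demanding perfection from the transport.
        """
        return file_digest(received_path) == expected_digest

    The sender computes the digest before the transfer, ships it alongside the file, and the receiver calls verify_transfer() once the transfer claims to be done; if it fails, the application retries.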

    Walt
  • That's an excellent example of comparing apples to oranges. Internet access is not about delivering certain content, yet you cite three content-based systems as things that are not provided unbundled. The only absurd statements I see are yours, comparing network access to something it isn't, and assuming that being able to buy just network access precludes being able to sell any sort of enhanced network access.

    I agree with the person who compared network access to things like the phone system -- you can buy just local phone access if you want; you do not have to buy caller ID or call forwarding or any of the other extensions that phone companies offer. You do not even need to get long distance on a phone line! Being able to buy that basic phone access is analogous to the consumer right that Laubach was advocating.

    Despite the fact that phone customers do not need to buy the extensions, many phone companies do provide them, and many consumers do buy them. They are a value-add that is compatible with the basic network, and IP is much richer in protocol terms than dialtone *and* much more compatible with other protocols living on the same wire. Any argument that mandating the availability of a basic IP dialtone will prevent innovation is pure FUD.

    As for rewriting laws and IPv7, you vastly underestimate the ability of people to get around unjust laws (through legal or extra-legal means) and the momentum that preserves protocol compatibility. If the laws are unjust, organizations like the EFF and EPIC will fight them in the courts and coders around the world will demonstrate on the network how silly such regulations are. IPv4 and IPv6 are both very compatible with privacy and anonymity services, and the cost to replace them everywhere will prevent regulation requiring transition to any new protocol.
  • Err.. please discriminate between caching and application-level gateways aka intermediaries.

    There's a lot of commercial interest right now in making intermediaries "smarter" so that they can process messages as they pass through.

    Deployment of these devices cannot be stopped technically, and it's doubtful that they'll be stopped legally. What can be done is the establishment of a framework that at least makes the server and/or client aware of the modification to a message, and hopefully gives them some degree of control over it.
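
    HTTP/1.1 already gestures at such a framework with its Via and Warning headers. As a rough Python sketch (the proxy name and the plain-dict interface are invented for illustration), an intermediary that rewrites a response could at least disclose the fact:

    def mark_transformation(headers, proxy_name="cache1.example.net"):
        """Annotate a response that an intermediary has modified in flight.

        Uses hooks HTTP/1.1 already defines: a Via entry naming the hop,
        and Warning 214 ("Transformation applied") so the client can tell
        the body is not byte-for-byte what the origin server sent.
        """
        via = headers.get("Via")
        headers["Via"] = (via + ", 1.1 " + proxy_name) if via else ("1.1 " + proxy_name)
        headers["Warning"] = '214 %s "Transformation applied"' % proxy_name
        return headers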
  • Laubach stated that "basic IP dialtone" -- that is, a simple TCP/IP Internet connection without frills or bundled services -- should be a consumer right

    Wow, a basic consumer right? That is an absurd statement, and it is typical of the techno-elite that frequent these conferences.
    You don't see 'unbundled' newspapers being a basic consumer right, or magazines, or even TV shows. If you want to watch TV 'unfettered', you still have to watch the ads that the network wants you to see.
    In addition to stomping out competitive desire amongst those providing us with bandwidth, this would harm innovation. If I had a cool new technology that I wanted to offer my customers, but it couldn't be adapted to pure, unfettered IP, I would be prevented from offering it over my pipes.
    Would we rewrite the laws to include IPv6? Okay, let's say that we do. Can you imagine the level of control this would give the government over the development of IPv7? If the protocol doesn't satisfy their desires for tapping, tracing, tracking, etc., it couldn't be used...
  • Further discussion brought out the case of Stockholm, Sweden. Stockholm and certain other cities have taken on the job of laying fiber-optic cable as a municipal service, similar to sewer service or water or roads. Since the municipality built the pipe to the home, there is no issue of a company attempting to monopolize the pipe, and any company which wants to offer Internet service over the pipe may do so.

    Being the devil's advocate on this issue, I just wanted to bring up the case of Burlington, Vermont. Burlington recently voted for a bond issue to fund the building of a fiber-optic network in the city. If I remember correctly, the city voted not only to build the fiber, but also to provide access services. And as could be predicted, since the bond vote the city hasn't even begun to string wire. A few questions arose in my mind about this.

    First, do I really want my municipality to be responsible for a fiber network? Given the glacial pace of bureaucratic decision-making and the budget constraints in most municipalities, it seems quite possible that by the time a municipality BUILDS a network, it could well be outdated. Not to mention maintenance. Or the folks screaming about their taxes being raised to fund these projects.

    Second, what about issues like security and censorship? Think about the trouble AT&T is getting into over the mere availability of adult content on their cable network. Or the issues swirling around Internet access in public schools and libraries. I could just imagine the policy implications....

    I think the notion of transferring the responsibility for Internet access to the public sector is admirable, and of course we want everyone to have "fair access" to the Net, but I question the viability of public projects like these.

  • incumbent spectrum-holders pushing regulations which protect their investments

    Read as: Corporate $whores$ line the pockets of a corrupt US Government. When did profit become the sole civic will of the US Government?
  • Let's not call it a consumer right but a human right. I reject the notion that I am a 'consumer' - I am a citizen, a person, a brother, a husband and a great many things - NONE OF THEM ARE 'CONSUMER'
  • It looks like (in the first few paragraphs of your post, anyway) you are interpreting their use of "end-to-end" in the wrong way. It's not an issue of having a single provider / service from one end of the connection to the other (a la your AT&T example) but rather a design philosophy for placing intelligence in a system. The end-to-end "paradigm" calls for, in effect, putting the intelligence at as high a level as you can get away with. Lower level services should be implemented with just enough intelligence to do their job efficiently.

    Read the Saltzer et al. paper that was referenced in the first post in this series -- it explains the end-to-end argument well and gives some examples.

    So anyway, an end-to-end argument says nothing about who should own or run which parts of the network. It _does_ argue that people running the IP services shouldn't be doing anything other than moving packets through the network. The modularity of the network stack should be respected, so decisions at the IP level should not be made using, e.g., the contents of the IP packets.
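
    As a toy illustration of that modularity (the addresses, router names and dict-based "header" are all invented for the sketch), a forwarder that respects the layering decides where a packet goes from the header alone and never peeks at the payload:

    # Made-up routing table: destination address -> next hop.
    ROUTES = {"10.0.0.7": "router-a", "172.16.4.2": "router-b"}

    def forward(header, payload):
        """Pick a next hop from the header alone; pass the payload through.

        Anything that depends on what the payload bytes *mean* belongs in
        the applications at the two ends, not here.
        """
        next_hop = ROUTES.get(header["dst"], "default-gateway")
        return next_hop, payload  # payload untouched, never inspected

    # forward({"src": "10.0.0.1", "dst": "10.0.0.7"}, b"opaque application data")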

  • The pitch of the article (as someone else pointed out) was as if certain ideas were the last bastion of all that's good, and these are under attack. Specifically, end-to-end (and its specific current incarnations, like TCP flow control), a single flat IP space, and "Universal Service."

    (It's funny, because I thought everybody agreed the internet was a fantastically lucky lean-to of kludges, in desperate need of some paradigm-shifting soon. Oh, and I thought bandwidth was cheap by now.)

    The article paints the existing internet paradigms as threatened by things people want to do and are doing to networks. Each of these new activities was put in its most paranoid light. (I love the use of the word "nonconformant" meaning "not-e-to-e-correct"!) But some of them (like reliable phone connections! How hard can this be to understand!?) are just sensible applications of networks that people would like. The internet is under attack by reality. I won't address all the paranoias and stressed paradigms separately.

    The internet does have a kind of unified technical-political coolness. John Gilmore said it "routes around censorship". Only here, we're talking about practices that (it is feared by some) *break* the internet. In other words, internet as it is *fails* to deal with these issues. Why? Why couldn't it serve the sensible needs and defang the bad guys, the way it "routes around censorship?" (Answer: it's imperfect.)

    There's a big all-or-nothing, us-or-them attitude in both the writer and (seemingly) the other attendees. Why must it be TCP/IP-style decentralization vs. some imagined centralized architecture? What about variations on decentralization? What about fine-grained market models, for instance?

    What we want is a system that can provide various sorts of services (e.g. bandwidth to some, reliability to others), to a public that is totally willing and able to pay, in a way that's more resilient in the face of bugs, clogs and the various behaviors of regulators, big companies, cops, hackers and clueless sysadmins. A way that makes bad guys less relevant and paranoia less necessary. Like real markets do. Like the internet does *somewhat* already, only with some flaws and vulnerabilities fixed.

    Einstein said everything should be as simple as possible, but not simpler--I don't think pride in the internet's successes should blind us to its oversimplifications and prevent us from redesigning.

    The trouble is that the ideals of the people who retrench ideologically will be "routed around" by people like Cisco and Sprint.
  • by phil reed ( 626 ) on Thursday December 07, 2000 @08:41AM (#574992) Homepage
    From where I sit, a very telling point:

    The phone companies had a 1 MHz twisted pair of copper strands that they swore up and down couldn't be shared. They were ordered to share it, and now are doing so: local and long-distance competition, shared data/voice over that tiny line, co-location at central offices, etc. Now the cable companies have a 750 MHz copper wire that they claim is "impossible" to share.

    I do hope the folks from the FCC who were in attendance make special note of this.


    ...phil

  • by alienmole ( 15522 ) on Thursday December 07, 2000 @05:32PM (#574993)
    Now, is this something people would consider to be NOT pure end-to-end IP?

    That's correct, this is NOT pure end-to-end, and no, email is NOT a valid exception. What you're doing is preventing your customers from directly using mail servers other than your own. Regardless of your intentions, this could just as easily be viewed as a monopolistic act, and when it's done by large companies like AOL, becomes just that.

    I've recently had this done to me by Earthlink. I used to use my own mail server colocated at a hosting site to send outbound email. In the last few months, Earthlink, who I use as my local ISP, began blocking port 25, which means I can no longer access my own mail server to send email. I consider this unacceptable.

    Other replies in this article have mentioned other services that they can't use with their provider, for example VNC. I had to abandon IBM Global Network when they started using a proxy which prohibited me from using local names for machines on a remote network, i.e. it wouldn't allow web requests to remote machines defined in my hosts file: the "transparent" IBM proxy didn't recognize the machine name in the HTTP GET request, even though my machine was attempting to send the request directly to the correct IP address. This prevented me from running intranet applications remotely.
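
    Incidentally, if you want to confirm that kind of blocking from the client side, something like this minimal Python sketch does the job (the host name is a placeholder, and it can't tell you *which* hop is doing the filtering, only that something is):

    import socket

    def port_reachable(host, port, timeout=5.0):
        """Attempt a plain TCP connection; True if something answers.

        A timeout or refusal when connecting to a server you know is
        listening (port 25 on your own mail host, 5900 for VNC, etc.)
        suggests a block somewhere in between.
        """
        try:
            s = socket.create_connection((host, port), timeout=timeout)
            s.close()
            return True
        except OSError:
            return False

    # e.g. port_reachable("mail.example.com", 25)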

    I think there's a real risk that the Internet will slowly devolve into a system which only allows communications on certain predefined ports, like 80, using predefined protocols, like HTTP. Already, we see systems that go to great lengths to package their communications into HTTP form in order to bypass firewalls and proxies. This will just create a stupid arms race in which people who want to abuse the network get more creative about how they do it, while people who have legitimate uses find the functionality available to them continually eroded.
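
    The "package it as HTTP" trick that arms race produces is depressingly simple -- something like this sketch, where relay.example.net is a hypothetical unwrapping relay, not a real service:

    import http.client

    def send_via_http(payload, relay_host="relay.example.net"):
        """Smuggle an arbitrary payload out as an ordinary-looking POST.

        The network in between sees only a web request on port 80; the
        relay unwraps the body and forwards it wherever it was really
        meant to go.
        """
        conn = http.client.HTTPConnection(relay_host, 80, timeout=10)
        conn.request("POST", "/tunnel", body=payload,
                     headers={"Content-Type": "application/octet-stream"})
        resp = conn.getresponse()
        data = resp.read()
        conn.close()
        return resp.status, data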

    While it might be possible to pay more money to get the services you want, this has the potential to significantly impede development of future systems if not everyone has access to the same features. After all, I fervently hope that HTTP 1.1 is not the last word in communications protocols - but how will the next revolutionary replacement be developed if the inventors aren't allowed to send anything other than approved packets across the network to approved ports?

  • by ratkins ( 19145 ) on Thursday December 07, 2000 @10:11AM (#574994) Homepage
    Did it strike anyone else that these two stories are possibly the most important things to be posted to /. over the last few months?

    In one stroke they cover all of the perennial Slashdot themes: technology and good software engineering, freedom versus corporate control, government intervention, censorship (no hot grits though :-).

    These people are deciding who gets to "own" the information delivery system that will be more crucial than most people can imagine right now. It is incredibly important that we get it right the first time. The infinite wisdom of the designers of TCP/IP is now showing, in that the network has scaled amazingly well nearly thirty years on. There are ignorant, greedy people out there who want to fundamentally screw this up so they can make money out of it.

    Thank you to the original author by the way, for the excellent summary of the proceedings.

    For what it's worth, I'm all for Stockholm's model. IMHO bandwidth is an infrastructure thing and, like roads, sewers and electricity, should be provided and maintained by the state -- by its nature it's most efficient to do these things in a monopoly fashion. I'd prefer to have shit broadband due to an incompetent local council than because an Evil Corporation had me by the balls.

    Cheers, Robert.
  • by paulio ( 24772 ) on Thursday December 07, 2000 @09:50AM (#574995)
    AT&T Cable prohibits VNC [att.com] on their network, at least from my work computer to my home computer. This probably comes under the "no servers" rule. It seems to be blocked at their firewall.

    My work's firewall prohibits VNC (or any direct connections) from my home computer to my work computer in the name of security.

    Now I have this really fast connection which has no value for telecommuting: remote control, file transfer, telnet, etc. Great! Maybe this can be fixed by a VPN, but that's just some other thing that I have to figure out rather than getting real work done.

  • by gaijin99 ( 143693 ) on Thursday December 07, 2000 @09:11AM (#574996) Journal
    Agree. That is the only killer argument against caching, or anything else that can allow censorship.

    Not only do we need to fear government-imposed censorship (as we already see in China), but also corporate-imposed censorship. I can see, say, Verizon preventing packets containing anti-Verizon content from being passed.

  • by acceleriter ( 231439 ) on Thursday December 07, 2000 @11:16AM (#574997)
    That's pretty darned ironic, considering VNC was created at AT&T laboratories. I can't wait until the first time one of these cable companies gets smacked down for not filtering something, since they've taken it upon themselves to do that. You would think AT&T would know what "common carrier" means and the protections it provides.

    You may want to consider recompiling the source for VNC and running it on a higher-numbered port, such as would be seen in passive FTP--this would be easier than setting up a VPN. All bets are off if they're actually doing packet inspection, which I doubt.

  • by Tau Zero ( 75868 ) on Thursday December 07, 2000 @09:32AM (#574998) Journal
    Is there a solution to maintenance bullying? Or will we need to forbid the physical line providers from providing service simply to ensure that they don't abuse their maintenance monopoly to get customers away from everyone else?
    Yes, we will. If we can get a big enough political stink going over these documented abuses, maybe we can force the companies which own the wires (CWOTW) to divest the companies which deliver the content (CWDTC). But we have to start NOW.

    This is very similar to the way the phone company used to work. AT&T used to own everything, from the local loop to the long lines to the very phone on your wall. They had no real incentive to hold down costs, because they were guaranteed a slice of everything and a certain rate of return on investment. This led to enormous overcharging for long-distance service and nowhere near enough work on making the local loop cheap (because it was subsidized, and making it cheaper reduced the investment on which AT&T got its return). Separating the various functions led to enormous increases in choice for both long-distance and phone instruments, answering machines, voice mail and you-name-it.

    Cable companies don't have the guaranteed return which AT&T once had, but most of them do have local monopolies stemming from the fact that most cities only allowed one cable company per area. As I have noted before, it is just plain wrong to allow this accident of history to dictate what services can be obtained by subscribers in a particular area. The cable is there to deliver packets, and the cable company should be able to charge money for it. The cable companies ought not to be allowed to have any financial relationship to the companies which generate the packets, nor discriminate between them, any more than SWB should be allowed to discriminate between long-distance providers.
    "
    / \ ASCII ribbon against e-mail
    \ / in HTML and M$ proprietary formats.
    X
    / \

  • by moopster ( 119808 ) on Thursday December 07, 2000 @08:53AM (#574999)
    I am glad that there was dialog at the conference that promoted free speech as something of importance. I have been to similar conferences (not in topic), and people get sooooo lost in the financial (bottom line) aspects of such innovations that we end up building a system that sets the framework for true content control. This can only lead to new "hate crime" legislation that will one day never allow a packet with a naughty word embedded in it to move through the Internet. Never give them the chance to regulate the content; it will only be abused by those with more power and/or money... Just my $0.02.

    ----------
    No army can withstand the strength of an idea whose time has come.
  • by gaijin99 ( 143693 ) on Thursday December 07, 2000 @09:03AM (#575000) Journal
    For me one of the questions is: how does maintenance take place? If, for example, the cable company is the only one allowed to perform maintenance (and, frankly, having only one party perform maintenance sounds like a good idea to me), then what is to prevent them from delaying maintenance, or performing shoddy maintenance, for people who use a non-cable-company ISP?

    I ask because of the problems I've had with this. I live in Amarillo TX. I wanted to get DSL from a local ISP (ARNet). Naturally, only Southwestern Bell is allowed to work on the physical lines. Southwestern Bell also offers DSL service.

    I tried for more than two months to get DSL service. ARNet would place the order, and SWB would encounter a tiny problem and cancel my order without telling either ARNet or me. Eventually, two months after it started, SWB canceled my order for the third time (I only found out because I called ARNet, and they called SWB -- which is also how I found out about the previous two cancellations), saying that I could never have DSL where I live because there was an "obstructor" on the trunk line.

    I have difficulty believing that I would have gotten this much hassle had I gone to SWB directly for my DSL.

    Finally, I gave up and am now getting broadband through COX cable.

    2600 had similar problems: http://www.2600.com/news/2000/1002.html [2600.com]

    Is there a solution to maintenance bullying? Or will we need to forbid the physical line providers from providing service simply to ensure that they don't abuse their maintenance monopoly to get customers away from everyone else?
