
Shirky On P2P
There's an interview with Clay Shirky over at O'Reilly's OpenP2P network regarding P2P. Some of the piece is wordy rumination over what peer-to-peer (and dear lord do I hate that term) is and where it's going, but the most interesting part, IMHO, is the discussion of web services and the changing definitions of "client" and "server".
Peer to Peer (Score:1)
I don't know what the big fuss is all about. Hasn't the Internet always been peer-to-peer? Why even come up with a new name for it?
Dlugar
Not really - Client / Server communications... (Score:2)
I don't know what the big fuss is all about. Hasn't the Internet always been peer-to-peer? Why even come up with a new name for it?
Not really - try Client / Server instead. For instance, you don't send email to someone directly - instead you send it to your server, which then talks to another server, and the end client downloads the email from the server.
Browsers talk to servers - you are the client. FTP clients talk to servers. It goes on and on... most of the Internet has been (and probably will continue to be) Client-Server communications, not Peer to Peer communications.
Re:Not really - Client / Server communications... (Score:1)
"...which then talks to another server, and the end client downloads the email from the server."
Uh, no. That's only since the advent of Dial-ups and NAT. The original plan was that the mail went directly from the host you were logged into to the host the recipient usually logged into.
Likewise, all the hosts ran FTP servers as well as clients.
Re:Not really - Client / Server communications... (Score:2)
It wasn't coined recently. It's been around forever. It's just that nobody ever bothered saying "Yes, this network is peer to peer..." or "...this one has a server."
I remember hooking up two Amigas with a parallel cable and running that nifty little tool called "Parnet". It was supposed to create a "Peer to Peer" network of ... uh ... two computers.
Not much of a network, but in a way it kind of resembles the internet of today. A kludge.
The first network I ever owned... (Score:2)
The funny thing is that everybody's acting like "Peer to Peer" is some recent buzzword that was created in this new age of unwashed idiot internet users.
Strange how that works, eh?
It wasn't coined recently. It's been around forever. It's just that nobody ever bothered saying "Yes, this network is peer to peer..." or "...this one has a server."
Concerning the Internet, yeah. For networking in general, the term was out there. Even MS was making the distinction in their manuals for Windows for Workgroups 3.11 (insert flashback to much worse days... WfW 3.11 *SHUDDER*). The term has been kicked around for a long time, but now, with this new 'peer to peer' networking thingy on the Internet, it has introduced quite a few people to the term. And, as someone else pointed out, Peer to Peer means Napster or some other evil technology these days, it seems. *SIGH*
I remember hooking up two Amigas with a parallel cable and running that nifty little tool called "Parnet". It was supposed to create a "Peer to Peer" network of ... uh ... two computers.
hehehe - man, I remember this. I built that stupid bidirectional parallel cable and hooked up the two 1000s. (IIRC, there was a slight difference between the 1000's and the 2000's parallel ports that made hooking up to the 2000 - which had a bigger HD - a pain.) I thought that was like the coolest thing... then realized I had almost nothing useful to do with it! ;-)
Client/Server vs. Peer to Peer? (Score:1)
Yeah, right ... I have a Gnutella client which I search through a network of servers. Interestingly enough, though, I also (but not everyone does!) run a Gnutella server myself on my computer, allowing other people to search and download from my own.
So what you're saying, in essence, is that more and more home computers are becoming servers--which was how the Internet was before the "home computers" started logging on.
Dlugar
Re:Not really - Client / Server communications... (Score:2)
For instance, you don't send email to someone directly - instead you send it to your server, which then talks to another server,
I don't do it that way -- I run my own mail services on my own Unix systems, with a static IP.
and the end client downloads the email from the server
If someone's using dial-up, that's their problem. I have no control over the messages once they're handed to the MX.
P2P (Score:1)
The uneducated only think of P2P as a way to rip stuff off of the man. We need another P2P killer app to make the general populace fall in love all over again... Otherwise, any P2P app will get stomped all over.
Re:P2P (Score:2, Interesting)
Slashdot effect? (Score:1, Insightful)
Re:Slashdot effect? (Score:1)
Someone post a link, if they can remember...
That's Freenet. (Score:1)
Wasn't there some article here the other day about some P2P network working better the more users were using it at once?
Are you looking for the Freenet Project [freenetproject.org]? Each Freenet agent ("clerver" sounds clumsy to me, and "servent" should have passed away with the 13th Amendment) retrieves documents from the closest user on the net, so as more people grab the latest Linux kernel [freenetproject.org], the path between each user and Linux 2.4.x becomes shorter and sending each copy creates less long-haul traffic.
Mojo Nation and other swarming apps (Score:2)
Many hands make light work...
Re:Mojo Nation and other swarming apps (Score:1)
terms (Score:2)
Me too - but not as much as I hate CRM, B2B, and all the other crap that's always sprayed across the front of InfoWorld.
Re:terms (Score:1)
I know the P2P term is not terribly accurate. I mean, it's still client/server, nothing new indeed.
What is new IMHO is 'who' is using the technology.
The term P2P IMO reflects not what technology is used but who is using it: DNS-deprived, fixed-IP-deprived and/or transiently connected folks.
Perhaps it should be P2P as in: 'Person 2 person'
as Point 2 Point is a little too obvious and non-descriptive.
SHIT (Sure Happy It's Thursday)
Re:terms (Score:2)
I thought that's what it was. Isn't PPP the abbrev'n for Point to Point Protocol?
Re:terms (Score:1)
P2P = Point 2 Point (application layer)
You're right, but I was implying that P2P can mean different things to different people. I see it as more person-to-person technology.
Thanks.
Ah, so that's what it's about... (Score:1)
"One's having terrible trouble with one's gout"
or
"Have you heard about the disgraceful amount of cider Lady Thatcher was seen drinking last night in the members' lobby"
But alas, no.
I'm glad we cleared that one up.
Rise and Fall of a Buzzword (Score:2)
In comparison, I'd like to draw your attention to another recent buzzword/fad combination, the repeating rifle. Previous to its adoption as a standard military tool, it was employed by some individuals to great effect. Later, as one side adopted it wholesale, it gave an unbelievable advantage to a single army (The United States of Napster). Nowadays, however, everyone has a repeating rifle, thanks largely to the datahaven provided by the USSR... cheap AK-47s are available anywhere in the world, usually for about US $25. By that point it was moot, since the buzzword race had moved on to nuclear weapons.
Where my analogy breaks is I don't think
Not TLA (Score:2)
Client vs. Server (Score:1)
Initially, we had bare-bones thin clients which were no more than glorified typewriters that could output their characters over copper wire to mainframes. The structure was very simple: if it could fit on a desk, it was a client; if it took a room to hold it, it was a server.
Then came the personal computer. Soon people were setting up their own web servers, their own FTP servers, etc. This, IMHO, is when the definition began to be blurred, as clients started behaving as servers and servers faded away.
Recently, though, the shift has swung back to dedicated servers because of the increasingly high demands of upkeep. Joe Q. User doesn't want to be bothered keeping up with updates (Code Red, anyone?), and so he decides to let other people deal with it through proxy servers.
Everything runs in cycles; eventually, it will shift back, but for now, servers are here to stay.
Re:Client vs. Server (Score:1)
"It seems likelier to me that peer- to-peer will converge on standards pioneered by the Web services people, rather than on standards arising directly out of the peer-to-peer world."
That should be rephrased as
"It seems likelier to me that peer- to-peer will converge on standards pioneered by Microsoft and TimeWarner, rather than on standards arising directly out of the needs of the peer-to-peer client world."
Just my Humble Opinion
Just wondering... (Score:1)
I can see why the RIAA, MPAA et al. don't like it though.
And it goes marching on (Score:4, Interesting)
I do admit that the ultimate goal of the web services vision is admirable, but it seems to me to be just a bloated (UDDI+WSDL+SOAP+XMLSchema+HTTP(+P2P?)) version of what many software agent research groups have been after for years. Come on people, stop the insanity! There's gotta be a better way!
(And no, I didn't read the whole article; I had to stop and release the built-up rant pressure before the insanity blew my head open. Go ahead and mod me down for being an offtopic troll now.)
Re:And it goes marching on (Score:1)
This is not true: if DHCP and DNS are configured properly, the client can update the DNS server, so the DNS records are kept up to date.
NAT is a different story, though.
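For the curious, that client-driven update is an RFC 2136 dynamic DNS update, and it's only a few lines on the wire. Here's a rough sketch using the third-party dnspython library; the zone name, hostname, and server address are made-up placeholders, and a real deployment would normally sign the update with TSIG rather than accept it unauthenticated.

```python
# Sketch of an RFC 2136 dynamic DNS update, the mechanism a DHCP client (or
# the DHCP server on its behalf) can use to keep a zone current.
# Assumes the dnspython library; "example.org", "laptop", and 192.0.2.x are
# illustrative placeholders only.
import dns.update
import dns.query

update = dns.update.Update("example.org")
# Replace any existing A record for "laptop" with the host's current address.
update.replace("laptop", 300, "A", "192.0.2.42")

response = dns.query.tcp(update, "192.0.2.53", timeout=5)
print(response.rcode())  # 0 (NOERROR) means the zone accepted the update
```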
Re:And it goes marching on (Score:1)
Heck, one could argue that if your firewall is configured "properly", then all traffic can pass through it in both directions. But I haven't seen any firewalls configured that way, either.
Re:And it goes marching on (Score:1)
:)
Re:And it goes marching on (Score:1)
Agreed, it is not an often-used feature, probably for security reasons.
if your firewall is configured "properly", then all traffic can pass through it in both directions
Doesn't that defeat the purpose of the firewall? Besides some sanity checks, firewalls are designed to block the flow of data to unwanted hosts and/or ports.
Re:And it goes marching on (Score:1)
Re:And it goes marching on (Score:1)
However, that relies on DNS clients respecting the cache lifetimes (TTLs) specified by the DNS server. My experience is that under Windows, not only does the OS cache DNS information without regard for those hints, but so does Explorer.
Re:And it goes marching on (Score:1)
Ah! That Old Misconception again... (Score:1)
It's precisely BECAUSE of the absence of an over-arching design that the net has been and will continue to be so successful. I agree that's a mite hard to grasp when people are used to devoting their lives to designing elegant systems (and/or are attracted to the M$ concept of centralised control), but I believe it to be true.
In design terms the net is a singularity - it needs to have a degree of chaos at its heart to function.
Web services ARE client/server (Score:1)
If I want to send a letter the fastest way is for me to take it to the post office - not to hand it to someone randomly on the street in the hope that eventually it will be handed to someone who is going to the post office.
Re:Web services ARE client/server (Score:1)
The client/server issue is not one of whether C/S architectures are good -- they obviously are -- but of whether 'client' and 'server' are permanent or temporary designations. On the web, browsers are clients and servers are servers, while in Napster or OpenCOLA or Groove, the nodes are both.
So all I am saying about Web Services is that there are a lot of things we can do when both halves of a client/server relationship can get and parse XML, and that models for this kind of two-way communication are closer to P2P than to the classic Web.
-clay
Re:Web services ARE client/server (Score:1)
Not sure I agree with that wholly. A client is always temporary; the server is fixed. A web service is exactly that - a service. In P2P we are equals: I can serve you, you can serve me. When I connect to a web site there is no possibility that I start serving web sites to it. I agree that there may be a conversation, as it were, but that does not equal P2P.
I agree with you that there are lots of uses for a P2P system, however, web services are not one of them. I'm also not convinced that just because a system uses XML it is P2P. The two are not synonymous.
I suppose a lot of this depends on what your definition of web services is. I am presuming (perhaps incorrectly) that you mean things that are accessed using a web page. If I'm wrong in this then please ignore all my points, as they don't make sense anymore.
Re:Web services ARE client/server (Score:1)
Your letter, if we talk about email, provides a good example. Take a look at the headers of your next message. It has been handled by a series of cooperating hosts completely unknown to you. The route your letter takes can change according to various circumstances. You may always hand your mail off to the same host initially, but where it goes after that is not up to you. The network decides the "cheapest" route and gets it to the receiver without any input from your end.
In that way, peer to peer is not really new or even that exciting-- the underlying internet protocols have used similar concepts to provide reliable end to end communication through an ill-defined swarm of hosts for a long time.
We can always reach the host we want, but not always through the same path. Likewise, once we have standardized some interfaces, we may always be able to retrieve the data we want, but not necessarily always from the same host. We're just moving an old, good idea further up the protocol stack.
So what happens when we want to store something that no one cares about except us? It's one thing to have multiple independent sources of something everyone wants, but how many machines are interested in warehousing some guy's letter to his girlfriend? (Not a good letter-- a mushy boring letter.)
Re:Web services ARE client/server (Score:1)
So what happens when we want to store something that no one cares about except us?
I think we are in agreement, there is no need to make everything peer-to-peer. In the end P2P is kind of client/server/server/client. It is just the same technology used twice. If that's what's needed then great, otherwise why waste the bandwidth?
Re:Web services ARE client/server (Score:1)
Still, every time our CAD operators start to bump against the storage limits on our file server, I start to think about all the unused drive space on their client machines. They have 5-10 GB apiece I am always telling them not to use because it's not secure and doesn't get backed up.
In the "wouldn't it be great" department, it would be nice if I could organize their unused space and processor cycles into a kind of "storage hive" which would appear to them (and my backup scripts) as a network file system. We have the LAN bandwidth to spare, but I am not sure how much cost would be associated with leaving all the workstations on all the time.
Then there are the problems associated with the reliability of a certain operating system they are all running. It seems to self destruct if you don't reboot it fairly regularly.
I think this differs from a SAN (storage area network) in some subtle but important ways, and it's as close as I can get to a "killer P2P app" for business.
Essentially, ditch the server and distribute its tasks among the under utilized clients. The whole server is replaced with a cluster of processes running on client machines.
Sounds great to me, but I don't think it can be built with the pieces we have today. I am not about to bet the bank on a swarm of Win98 boxen...
As FreeNet progresses, I find myself wondering how hard it would be to write a client for it that would present a CIFS (Samba, SMB) interface to applications.
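To make the "storage hive" idea a bit more concrete, here's a rough sketch of just the chunk-and-replicate part, with the peer workstations simulated as local directories. The peer names, chunk size, and replica count are arbitrary, and a real version would push chunks over the LAN and expose the result as a network file system (CIFS or otherwise) rather than writing to folders.

```python
# Rough sketch of the "storage hive": break a file into chunks and place each
# chunk on several peers so one workstation going down (or rebooting) doesn't
# lose anything. Peers are simulated here as local directories.
import hashlib
import os

PEERS = ["peer_a", "peer_b", "peer_c", "peer_d"]  # stand-ins for workstations
CHUNK_SIZE = 1 << 20   # 1 MiB chunks
REPLICAS = 2           # each chunk lives on two peers

def store(path):
    placement = {}  # chunk id -> list of peers holding it
    with open(path, "rb") as f:
        for index, chunk in enumerate(iter(lambda: f.read(CHUNK_SIZE), b"")):
            chunk_id = hashlib.sha1(chunk).hexdigest()
            # Spread replicas across peers round-robin.
            targets = [PEERS[(index + r) % len(PEERS)] for r in range(REPLICAS)]
            for peer in targets:
                os.makedirs(peer, exist_ok=True)
                with open(os.path.join(peer, chunk_id), "wb") as out:
                    out.write(chunk)
            placement[chunk_id] = targets
    return placement  # the "directory" a reassembling client would need
```

A reassembling client only needs the placement map and any one surviving replica of each chunk, which is also all a backup script would have to capture.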
Re:O'Reilly flogging P2P for all it's worth (Score:1)
OpenNap is stronger than Napster ever was.
That's not even to mention the abundance of decentralized systems.
Out of touch people should not post
Another definition? (Score:3, Funny)
Would that be considered "Pier to Pier Networking"?
- Freed
Web services my arse (Score:2)
Web Services: How different from the Web? (Score:4, Informative)
There are growing criticisms of the consensus vision of web services -- http / SOAP / WSDL / UDDI -- largely on the grounds that its complexity is un-web-like, and that there are uninvented and possibly uninventable layers required above UDDI for any two arbitrary applications to be able to find each other in the dark.
Dave Winer of Userland [userland.com], inventor of XML-RPC and co-designer of the SOAP spec, advocates an embrace of these two protocols by the Open Source movement as a lightweight way to advance the battle for interoperability. (Dave's ideas in many ways answer the Will Open Source Lose the Battle for the Web? [earthweb.com] article from earlier this month.)
Another group, in line with your "Apache is all we need" idea, has taken Roy Fielding's idea of the REST (REpresentational State Transfer) architecture as a way to extend existing web semantics further into the domain of applications. They have started a RESTWiki [conveyor.com] to expand on those ideas.
This is all a big mess right now, with no obvious clarity coming any time soon, but two things we can be certain of are that experiments with application-to-application traffic is going to increase dramatically in the next 12 months, whatever the framework, and that with MSFT driving this idea as part of
-clay
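For a sense of how lightweight the XML-RPC end of that spectrum is, here is a minimal round trip using only Python's standard library. The "echo" method and the port number are invented for the example; this is a sketch of the style being argued about, not anything Winer specifically proposes.

```python
# Minimal XML-RPC round trip: register one method, call it over HTTP.
# The "echo" method and port 8069 are made up for the example.
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

server = SimpleXMLRPCServer(("localhost", 8069), allow_none=True, logRequests=False)
server.register_function(lambda s: s, "echo")
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy("http://localhost:8069/")
print(proxy.echo("hello, web services"))  # -> hello, web services
server.shutdown()
```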
Re:Web Services: How different from the Web? (Score:2)
Sorry, I just don't see it. I think it's marchitecture designed by people who took Wired magazine a little too seriously. "Experiments with application-to-application traffic"? Do me a favour! It smells like a dead duck, it looks like a dead duck, and frankly it quacks like one, too.
Time will tell - personally, I reckon this is the first 'VRML' of the 21st Century.
Re:Web Services: How different from the Web? (Score:2)
This reply is a little late but this question seemed so strange to me that I couldn't help but answer it.
First, you discredit single sign-on but if you are a typical end-user with a dozen subscriptions to a dozen sites, single sign-on is a big deal and a massive improvement over sticky notes on your monitor.
Second, if you are selling something, wouldn't you want your product specifications (and in some cases your price) disseminated as widely as possible? If you sell digital cameras, you would absolutely love it if digital camera sites can easily incorporate your product specifications into their site.
People already do this stuff. How do you think the non-Slashdot Slashboxes work? They are XML-based web services (not SOAP, but web services nevertheless). The hype around web services is just about standardizing the information sharing that organizations already do. Once you standardize it, you build network effects and economies of scale.
Is this some sort of eCommerce thing? And if so, why should competitors want to share customer data?
Competitors don't. Partners do. Anyhow, you seem a little obsessed with single sign-on and user data. User data is just one kind of data worth sharing. There is also product data. Realtime product availability information.
Re:Web Services: How different from the Web? (Score:2)
I'll address your points in order.
Firstly, I /am/ a fairly typical end-user, and I do what 95% of end-users do: I have a small number of username/passwords which I have been recycling for the past four or five years. Obviously you don't want to do this for anything super-sensitive, but who in their right minds entrusts super-sensitive personal data to a third party?
Secondly: if I'm selling something on the web, then my specifications and pricing data are /already/ as widely distributed as possible (routing glitches permitting.) That's what the first two "W"s in "WWW" stand for! I definitely wouldn't want third parties snarfing product data from my site and selling it on to consumers: I'd be losing all that interesting marketing and demographic data. If "DigitalCameraNews.com" or whatever wants to link to my site, fine, I'm delighted.
Slashboxes? Does anyone actually use them? I don't. Perhaps I'm not that typical an end-user after all... my preferred sources of news are not a subset of the given slashboxes: slashboxes are all very kewl, but they don't really scale very well.
Realtime product availability data comes from the vendor's own stock control systems. You really think Argos (a typical UK consumer goods reseller) are going to entrust their stock control to manufacturers? I just don't think it's going to happen on any significant scale.
Returning to my VRML example: were you around when VRML was being hyped as the future of the Net? Just because something is technically possible, doesn't mean it'll work in the real world. (See WAP for another example: if you want to make pots of money, short stocks of telcos who spent a fortune on 3G licenses now, cos they'll be bankrupt or taken over within five years. )
Lots of time and money was put into demonstrating that the whole idea just wasn't going to fly. I'm open to the remote possibility that I'm wrong about this ;) but I'm not holding my breath. I guess only time will tell.
Re:Web Services: How different from the Web? (Score:2)
The only question is whether we all hand-roll our own solutions or have standards for doing it. XML is the structured data specification and web services are about how to move the information from place to place.
VRML was totally different because it was about building customer demand for a new product (3D). Sharing structured information is not new. Most web-connected organizations do it in one way or another today. Just as the Internet glued together the disparate proprietary networks, the web services standards try to glue together disparate information sources.
If you want to argue against the necessity for particular standards, we could have an interesting discussion. But you seem to want to deny that organizations need to share information in real time. Or that we need standards to lower the cost of doing that.
Re:Web Services: How different from the Web? (Score:2)
This certainly does /not/ mean that a significant number of /web sites/ need to share data. Commercial partners: yes. Web sites? nah. EDI is expensive partly because it doesn't run over the internet - let alone over HTTP. There are very good reasons for this...
VRML wasn't purely marketing driven. Everyone who jumped aboard the bandwagon had an angle on how it was going to be great, including many intelligent, dedicated code hackers and highly technical people. In the same way, I think the 'web services' meme clicks with a lot of developers who've had to reinvent the wheel over and over again. I fail to see how this can drive its use, though; companies prosper or file Chapter 11 based on criteria other than whether the developers have a fun time building the systems.
Perhaps I have misunderstood what is meant by "web services". A backend database replication or stock updating system is nothing to do with the web except that the data /may/ end up used on a site. But that's nothing new: how does Slashdot (or ZDNet, for that matter) get their content? How does Amazon get its content?
Re:Web Services: How different from the Web? (Score:2)
Okay now we are converging.
"Web Services" don't necessarily have anything to do with "Web Sites" except insofar as most modern development is exposed through a website in one way or another.
Web services are called Web Services because they use Web standards like HTTP and XML. Plus it just gets the VCs excited to prefix anything with the word "Web" (or it did a couple of years ago when the term was coined). Essentially they are distributed computing protocols built on top of existing Internet protocols.
Even so, some web SITES do want to share structured information. Check out meerkat.oreillynet.com. Slashdot probably gets its slashbox content using a system similar to Meerkat. Google gets XML content from the Open Directory Project.
Also, one point that Clay Shirky made is that there are some radicals who think that HTTP is the only protocol you need to do structured information interchange -- whether you are doing "Web site" stuff or not, they claim you should expose your structured data through HTTP(s). (even if you only give the password to your partners)
EDI is expensive partly because it doesn't run over the internet - let alone over HTTP. There are very good reasons for this...
Many people disagree with you. Most EDI people are running towards the Internet as quickly as possible. That's the whole point of ebXML. Maybe value added networks will survive for security and quality of service but non-IP protocols are dead.
Re:Web services my arse (Score:2)
So 'web services' are only of any use to developers? (I'm assuming that a user who is capable of scripting some sort of RDF parser to pop up the latest /. headlines on their desktop comes under the heading 'developer'.) Users of such applets are still just Users. And let's face it, you don't need a fancy XML framework to enable a script to go grab the latest /. stories: that's what LWP::Simple and regular expressions are for ;)
WRT the 'time spent sifting HTTP data' argument... hmmm. Personally, I find that the process of sifting is what the Web experience is all about. Intermediaries such as Slashdot aggregate others' content and provide links and commentary; I trust some more than others, but I don't read every /. story (by any means). I'm more than happy for skilled human editors *cough* to play the part of intermediaries; Slashdot of course allows anyone to submit a story and become an editor themselves. /THAT/ is what the web experience is all about: many to many, via an arbitrary subset of mediators. I'm sure there's an apposite quotation from The Cluetrain Manifesto [cluetrain.com] at this point, but I don't have the book with me right now...
A new P2P idea? (Score:1)
Search engines are having a harder and harder time spidering the web as time goes on. There are more and more sites to spider and more and more indexed sites to update. This seems like a perfect application for a peer to peer application - kinda like gnutella, except only the indexing information would be held by the nodes rather than the data itself... There are lots of idle machines out there with idle bandwidth - we could create a peer to peer solution where each node spiders a minute part of the web. When it's done, it could send the info to its immediate peers so that a node being down doesn't hide the index. A search could be done using a similar technique to gnutella, but the advantage would be that the system could index all the information that is currently out there rather than just the items stored in the application. Does anyone know if anything like this has been done, or is even feasible?
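The shape of the idea is easy to sketch even without answering the feasibility question: each node indexes its own small slice and floods queries to its immediate peers with a TTL, gnutella-style. Everything below is a toy -- in-process nodes, hard-coded "pages", illustrative URLs -- meant only to show the division of labour, not a workable crawler.

```python
# Toy sketch of the distributed-index idea: every node indexes a small slice
# of the web and floods queries to its immediate peers with a TTL so queries
# die out. The "pages" are hard-coded strings; a real node would fetch and
# parse its assigned URLs.
class Node:
    def __init__(self, name):
        self.name = name
        self.peers = []
        self.index = {}  # word -> set of URLs this node has spidered

    def spider(self, url, text):
        for word in text.lower().split():
            self.index.setdefault(word, set()).add(url)

    def search(self, word, ttl=3, seen=None):
        seen = seen or set()
        if self.name in seen or ttl <= 0:
            return set()
        seen.add(self.name)
        hits = set(self.index.get(word, set()))
        for peer in self.peers:                # flood to immediate peers
            hits |= peer.search(word, ttl - 1, seen)
        return hits

a, b = Node("a"), Node("b")
a.peers, b.peers = [b], [a]
a.spider("http://example.org/kernel", "linux kernel 2.4 released")
b.spider("http://example.org/gnutella", "gnutella protocol notes")
print(a.search("gnutella"))  # found via peer b
```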
Re:A new P2P idea? (Score:1)
And an article about them in zdnet [zdnet.com] from earlier this year
Services (Score:1)
My understanding is that a 'server' is a machine that offers 'services'. So if you connect to a machine that is offering a service it is - by definition - a server.
If I'm right then we'll all have to agree that P2P is an empty internet buzzword that has no meaning.
Even with Napster it wasn't really P2P. You were getting MP3's from a machine 'serving' them. That makes it a server.
Maybe P2P should be renamed to NEN, or 'normal everyday networking'.
Claric
Difference between P2P and C/S (Score:1)
You were getting [files] from a machine 'serving' them. That makes it a server.
Why can't a server also act as a client? We need to get away from the idea that a "server" must always have a permanent, high-throughput connection with three or more nines reliability at the edge of the network.
Even with Napster it wasn't really P2P. You were getting [files] from a machine 'serving' them. That makes it a server.
But that machine was also 'getting' files from other servers. Read the article. In a client-server system, the client and server have relatively fixed roles, whereas in a distributed or "peer-to-peer" system, the clients and servers exchange roles frequently or even concurrently, preferably even when the agents are on transient network connections. For example, I may be downloading the latest Linux kernel from user foo while serving the latest XFree86 release to user bar.
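That role-swapping is easy to picture in code: a node that serves its local files over HTTP in one thread while fetching from another peer at the same time. The port, filename, and peer address below are placeholders, and this is a sketch of the symmetry rather than any particular P2P client's protocol.

```python
# Sketch of a node playing both roles at once: it serves its local files over
# HTTP in a background thread while simultaneously downloading from another
# peer. Port and peer address are placeholders.
import threading
import urllib.request
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

# Serve the current directory to anyone who asks (the "server" half).
server = ThreadingHTTPServer(("0.0.0.0", 6346), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# ...while fetching something from another peer (the "client" half).
PEER = "http://peer.example.net:6346/XFree86.tar.gz"  # hypothetical peer
try:
    urllib.request.urlretrieve(PEER, "XFree86.tar.gz")
except OSError:
    pass  # the example peer doesn't exist; the point is the symmetry
```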
Re:Difference between P2P and C/S (Score:1)
Tell it to the ISPs (Score:1)
All it is that makes a server a server is it running services.
OK. I guess I was bitching to ISPs who think "server" is "something you pay $$$$$$ per month to run" and not something that the average consumer can run.
Client/Server Definitions (Score:1)
Furthermore, "Peer-to-Peer" infers an equality relationship, which in turn denies the client-server heirarchical model. Hence, the necessity to revisit terms that no longer fit the previously typical standard.
SOAP (Score:2)
.NET, Mono, Open Solution (Score:1)
Let's hope that's the way the industry goes.
It's actually very exciting, if your ISP doesn't start charging you for a P2P account, like they charge for a Presence on Web account
Flamesuit and thinking cap ON. (Score:2)
example:
[customer machine]
(connects to)
[SUPER NODE: file location cache, server cache, new anonymous name]
(connects to)
[gnutella network cloud]
Slogan: Guaranteed anonymity barring court order, instant connects, instant searches.
Hey, anonymity might work for ISPs too, but no infrastructure would be necessary for the big cache service, just bandwidth.
PS: CmdrTaco, "Lameness filter encountered. Post aborted!" was triggered by an ASCII network diagram.
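Stripped of the diagram, the super node is basically a caching, identity-hiding front end to the gnutella cloud. A rough sketch of that behaviour follows, with the actual network lookup stubbed out and all names invented for illustration.

```python
# Rough sketch of the "super node": answer repeat searches from a cache,
# forward misses into the wider network, and never pass the asker's identity
# downstream -- the cloud only ever sees the super node. The cloud lookup is
# a stub.
import time

class SuperNode:
    def __init__(self, ttl_seconds=300):
        self.cache = {}          # query -> (results, timestamp)
        self.ttl = ttl_seconds

    def lookup_in_cloud(self, query):
        # Stand-in for flooding the query into the gnutella cloud.
        return [f"host-{abs(hash(query)) % 1000}:/{query}.mp3"]

    def search(self, query, asker):
        entry = self.cache.get(query)
        if entry and time.time() - entry[1] < self.ttl:
            results = entry[0]
        else:
            results = self.lookup_in_cloud(query)   # asker is never forwarded
            self.cache[query] = (results, time.time())
        return results  # returned to `asker`; the cloud saw only the super node

node = SuperNode()
print(node.search("freebird", asker="customer-17"))
```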
Fluffy computing? (Score:1)
Unfortunately, no one seems to be able to come up with many concrete applications of this idea (other than a few mentioned, like music sharing). The reason is that most of the functions we expect from a network are based on client-server technology. There seems to be this hazy notion of "conversations" between machines, in the same way that two friends could have a chat, and that kind of fluffy computing model is not what I'm looking for when I visit my bank online to pay my bills.
I definitely agree that the HTTP protocol is ill-equipped to deal with the demands of today's network applications, but the solution is to standardize a better protocol, not throw out the entire model.
-- Brett
the web vs. the net (Score:1)
You know, there's a perfectly good word for "the publicly accessible Internet": the Net. The Web is a bunch of html pages hyperlinked together. The Net is that plus everything else. It's nice to be able to express oneself with precision, and if "the Web" becomes a blanket phrase for the entire net, we'll lose a little.
It's economic and political, not technical (Score:2)
There are major economic forces, forces so big that they are (hopefully temporarily) superseding the US Constitution, that are effectively trying to turn the Internet into a big broadcast medium. Essentially, to a media mogul used to TV and Radio, every electronic distribution means ought to look like TV and Radio. (Kinda like the old hammer/nail thing)
Centralized focus means ease of control. It means you can easily go after an ISP for content posted on their servers. The lawyers can wield a big OFF button.
Peer-to-peer is much more difficult to police, though it sounds as if they're trying against Gnutella.
But then realize just WHO runs the cable ISPs, and then take a look at their TOS, and it's immediately obvious WHY. Aside from not having adequate amounts of the correct competence to run a data network, they know that personal servers and peer-to-peer are more difficult to control. Therefore, "No servers for the use of others" is the most common rule on Cable. Note that DSL is generally more open, and that fits with the parent organization being a non-content-owner.
But as a cable subscriber with no hope of DSL, peer-to-peer is beyond my reach.
So...
We need a peer-to-peer proxy, for two reasons.
First, it lets me connect out of cable, and once connected to the proxy, it lets me act as a peer. If the cable companies got a little more enlightened, they might even run the proxy themselves. (Yeah, right! Who wants to wait?)
Second, as Code Red has shown, with default Microsoft security and Joe Sixpak running his home PC, the Internet simply isn't a safe place. For the most part, perhaps ISPs should allow NO incoming connections, by default*. A peer-to-peer proxy would be the only thing keeping the concept generally viable, in that case.
(*) By the same token, they should allow the knowledgeable user to open ports. (Again, fat chance!)
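The proxy idea boils down to reversing the direction of connection establishment: the cable-bound peer dials out to a relay and then answers requests that arrive back over that same socket. Here is a bare-bones sketch; the relay address is hypothetical and the one-line-per-request "protocol" is invented purely to keep the example short.

```python
# Bare-bones sketch of a "peer-to-peer proxy": a machine that cannot accept
# inbound connections dials *out* to a relay, then serves requests that
# arrive back over that same outbound socket. The relay address is a
# placeholder and the line-oriented protocol is invented for the sketch.
import socket

RELAY = ("relay.example.net", 7000)  # hypothetical relay run by a friendlier ISP

def serve_via_relay(shared_files):
    with socket.create_connection(RELAY) as s:
        s.sendall(b"REGISTER my-peer-name\n")
        stream = s.makefile("rwb")
        for line in stream:                   # each line is a request relayed to us
            name = line.decode().strip()
            if name in shared_files:
                stream.write(shared_files[name] + b"\n")
            else:
                stream.write(b"NOT FOUND\n")
            stream.flush()

# serve_via_relay({"hello.txt": b"hi from behind the cable modem"})
```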
Making peer to peer go away (Score:2)
RedBack has a model for DSL that the telcos love.
The telcos have tried this business model before, with X.25 (an overpriced flop), Minitel (an overpriced flop in the US), ISDN (an overpriced flop in the US), and 900 numbers (an overpriced success, but only for porn.) Here we go again.
None of this would go anywhere except that in the US, DSL is becoming an unregulated monopoly. This gives monopoly telcos the power to force this on their customers.
Very interesting... (Score:1)
However, it contained the dumbest phrase I have read in weeks, clearly proof that the smartest people make silly mistakes from time to time:
Well, if it's inevitable, then by definition trying to avoid it is pointless (not just "probably pointless", pointless).
Sorry, I couldn't resist pointing this out.
--Q
web services, big deal (Score:2)
Let's look at CPAN for a second. Here's how you run a CPAN site: cd to a public ftp directory and wget --recursive ftp://some-cpan-url. The smart client figures out the rest, and people can use dumb ftp clients too.
Here's how slashdot disseminates its feeds in XML and RDF and HTML: you grab it from a URL, and the webserver shovels it at you, blithely ignorant of the semantic meaning of the bits it's transferring around.
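That "grab it from a URL" model really is about this much code. The sketch below pulls the headline titles out of Slashdot's RDF feed with nothing but the standard library; the feed URL is the one Slashdot advertised at the time, so adjust it if it has moved.

```python
# Fetch Slashdot's headline feed and print the titles -- no SOAP, no WSDL,
# just a URL and an XML parser. Adjust the URL if the feed has moved.
import urllib.request
import xml.etree.ElementTree as ET

with urllib.request.urlopen("http://slashdot.org/slashdot.rdf") as response:
    tree = ET.parse(response)

for element in tree.iter():
    if element.tag.endswith("title") and element.text:
        print(element.text)
```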
In the magical world of web services, you now get to write special methods on the server end, configure the server to invoke them, and in general ensure that you don't interoperate with anything. Oh yes, you also get to classify the whole system with some big bureaucratic UDDI schema that is supposed to describe it all to any capable client, as if you hadn't already written the client to work with this domain-specific protocol.
All this might be great for intranet apps
Re:web services, big deal (Score:2)
Welcome to web services.
There are a lot of possible layers there. Yes, you can use SOAP and WSDL if you want to, but at its simplest layer you're providing a service of information (rather than HTML) over the internet (it doesn't have to be over HTTP). Slashdot is providing a simple web service in its RSS feed. I think you'll see a lot more people coming to the conclusion that they don't need SOAP and WSDL. Straight XML served over HTTP, perhaps with parameters in the querystring, is an excellent way to deliver web services.
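A minimal sketch of that "straight XML over HTTP with querystring parameters" style: one handler, no SOAP, no WSDL. The quote data, URL shape, and port are invented for the example.

```python
# Minimal "straight XML over HTTP" service: read a querystring parameter,
# return an XML document. Data, path, and port are made up for the sketch.
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.parse import urlparse, parse_qs

PRICES = {"RHAT": "4.12", "MSFT": "62.50"}  # made-up data

class QuoteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        symbol = params.get("symbol", ["?"])[0]
        body = '<quote symbol="%s">%s</quote>' % (symbol, PRICES.get(symbol, "unknown"))
        data = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

# HTTPServer(("", 8000), QuoteHandler).serve_forever()
# e.g. GET http://localhost:8000/quote?symbol=RHAT -> <quote symbol="RHAT">4.12</quote>
```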
I have a paper on why this is an excellent way to do things that I gave at the open source conference, but my web server is offline right now and won't be back up for about 4 weeks (due to the wonderfulness of British Telecom). When it's back, you'll be able to find it at http://axkit.org/docs/presentations/tpc2001/