Shirky On P2P

There's an interview with Clay Shirky over at O'Reilly's OpenP2P network regarding P2P. Some of the piece is wordy rumination over what peer-to-peer (and dear lord do I hate that term) is, and where it's going - but the most interesting part, IMHO, is the discussion of web services and the changing definitions of "client" and "server".
  • I don't know what the big fuss is all about. Hasn't the Internet always been peer-to-peer? Why even come up with a new name for it?



    Dlugar
    • I don't know what the big fuss is all about. Hasn't the Internet always been peer-to-peer? Why even come up with a new name for it?


      Not really - try Client / Server instead. For instance, you don't send email to someone directly - instead you send it to your server, which then talks to another server, and the end client downloads the email from the server.


      Browsers talk to servers - you are the client. FTP clients talk to servers. It goes on and on... most of the Internet has been (and probably will continue to be) Client/Server communications, not Peer to Peer communications.
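
      For illustration, here is a minimal sketch in Python of that relayed path (the hostnames are made up, and a real setup would also need authentication): the sending program only ever talks to its own outgoing server, which relays the message toward the recipient's server.

      import smtplib
      from email.message import EmailMessage

      msg = EmailMessage()
      msg["From"] = "alice@example.org"
      msg["To"] = "bob@example.net"
      msg["Subject"] = "hello"
      msg.set_content("Not peer to peer: this goes server to server.")

      # Hand the message to *our* relay, not to bob's machine; the relay looks
      # up example.net's mail exchanger and forwards the message onward.
      with smtplib.SMTP("smtp.example.org", 587) as relay:
          relay.send_message(msg)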


      • "For instance, you don't send email to someone directly - instead you send it to you server,
        which then talks to another server, and the end client downloads the email from the server."

        Uh, no. That's only since the advent of Dial-ups and NAT. The original plan was that the mail went directly from the host you were logged into to the host the recipient usually logged into.

        Likewise, all the hosts ran FTP servers as well as clients.
      • The funny thing is -- that everybody's acting like "Peer to Peer" is some recent buzzword that was created in this new age of unwashed idiot internet users.

        It wasn't coined recently. It's been around forever. It's just that nobody ever bothered saying "Yes, this network is peer to peer..." or "...this one has a server."

        I remember hooking up two Amigas with a parallel cable and running that nifty little tool called "Parnet". It was supposed to create a "Peer to Peer" network of ... uh ... two computers.

        Not much of a network, but in a way it kind of resembles the internet of today. A kludge.
        • The funny thing is -- that everybody's acting like "Peer to Peer" is some recent buzzword that was created in this new age of unwashed idiot internet users.


          Strange how that works, eh?


          It wasn't coined recently. It's been around forever. It's just that nobody ever bothered saying "Yes, this network is peer to peer..." or "...this one has a server."


          Concerning the Internet, yeah. For networking in general, the term was out there. Even MS was making the distinctions in their manuals for Windows for Workgroups 3.11 (insert flashback to much worse days... WfW 3.11 *SHUDDER*) The term has been kicked around for a long time, but, now with this new 'peer to peer' networking thingy on the Internet, it introduced quite a few people to the term. And, as someone else pointed out, Peer to Peer means Napster or some other evil technology these days it seems. *SIGH*


          I remember hooking up two Amigas with a parallel cable and running that nifty little tool called "Parnet". It was supposed to create a "Peer to Peer" network of ... uh ... two computers.


          hehehe - man I remember this. I built that stupid bidirectional parallel cable and hooked up the two 1000's. (IIRC, there was a slight difference between the 1000's and the 2000's parallel port that screwed up hooking up to the 2000 - which had a bigger HD) I thought that was like the coolest thing... then realized I had almost nothing useful to do with it! ;-)

      • Yeah, right ... I have a Gnutella client with which I search through a network of servers. Interestingly enough, though, I also (but not everyone does!) run a Gnutella server myself on my computer, allowing other people to search and download from my own.

        So what you're saying, in essence, is that more and more home computers are becoming servers--which was how the Internet was before the "home computers" started logging on.


        Dlugar
      • For instance, you don't send email to someone directly - instead you send it to your server, which then talks to another server,



        I don't do it that way -- I run my own mail services on my own Unix systems, with a static IP.



        and the end client downloads the email from the server



        If someone's using dial-up, that's their problem. I have no control over the messages once they're handed to the MX.

  • Until we can get rid of the nasty stigma Napster gave P2P...
    The uneducated only think of P2P as a way to rip stuff off of the man. We need another P2P killer app to make the general populace fall in love all over again... Otherwise, any P2P app will get stomped all over.
    • Re:P2P (Score:2, Interesting)

      by GlassUser ( 190787 )
      Well, email WAS peer to peer way back in the day: when you would telnet into a server (or just use a serial cable with a dumb terminal), you had POP and SMTP right there. Later in the day, it would dial up whatever peer was scheduled, belch out its mail, get fresh mail, and sort it. Repeat for the next host. It's only recently that we have the luxury of most servers having high-speed, always-on connections.
  • Slashdot effect? (Score:1, Insightful)

    by Anonymous Coward
    How would the /. effect work on a P2P network?
    • Wasn't there some article here the other day about some P2P network working better the more users were using it at once?

      Someone post a link, if they can remember...

      • Wasn't there some article here the other day about some P2P network working better the more users were using it at once?

        Are you looking for the Freenet Project [freenetproject.org]? Each Freenet agent ("clerver" sounds clumsy to me, and "servent" should have passed away with the 13th Amendment) retrieves documents from the closest user on the net, so as more people grab the latest Linux kernel [freenetproject.org], the path between each user and Linux 2.4.x becomes shorter and sending each copy creates less long-haul traffic.

      • I think that the sort of systems you are talking about are the so-called "swarming" systems. Pioneered by Mojo Nation [mojonation.net], these systems break content up into lots of pieces which can be served in parallel, speeding up transactions as the network gets larger. Other swarming apps are Swarmcast, EDonkey2000, and it looks like Centerspan is going to be pushing a swarming app soon.

        Many hands make light work...
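
        As a rough sketch of the swarming idea (not Mojo Nation's actual protocol; the mirror URLs and file size are invented), here is what pulling different byte ranges of one file from several hosts in parallel might look like in Python:

        import concurrent.futures
        import urllib.request

        # Hypothetical hosts all serving the same file; in a real swarm these
        # would be other peers discovered at runtime, not fixed URLs.
        MIRRORS = ["http://peer1.example/file.iso",
                   "http://peer2.example/file.iso",
                   "http://peer3.example/file.iso"]
        CHUNK = 1 << 20  # fetch the file in 1 MiB pieces

        def fetch_piece(job):
            index, url = job
            start = index * CHUNK
            req = urllib.request.Request(
                url, headers={"Range": f"bytes={start}-{start + CHUNK - 1}"})
            with urllib.request.urlopen(req) as resp:
                return index, resp.read()

        def swarm_download(total_size):
            pieces = range((total_size + CHUNK - 1) // CHUNK)
            jobs = [(i, MIRRORS[i % len(MIRRORS)]) for i in pieces]  # spread pieces over hosts
            with concurrent.futures.ThreadPoolExecutor(max_workers=len(MIRRORS)) as pool:
                results = dict(pool.map(fetch_piece, jobs))
            return b"".join(results[i] for i in pieces)

        The more hosts that carry the file, the more pieces can be in flight at once, which is why these networks speed up as they grow.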

  • by Satai ( 111172 )
    (and dear lord do I hate that term)

    Me too - but not as much as I hate CRM, B2B, and all the other crap that's always sprayed across the front of InfoWorld.

    /me thinks the 'free subscription' was just a ploy to dumb him down with acronyms.
    • Go ahead and mod me down as 'redundant or offtopic', but:

      I know the P2P term is not terribly accurate. I mean it's still client/server, nothing new indeed.

      What is new IMHO is 'who' is using the technology.
      The term P2P IMO reflects not what technology is used but who is using it: DNS-deprived, fixed-IP-deprived and/or transiently connected folks.

      Perhaps it should be P2P as in: 'Person 2 person'
      as Point 2 point is a little too obvious and non-descriptive.

      SHIT (Sure Happy It's Thursday)
      • Perhaps it should be P2P as in: 'Person 2 person' as Point 2 point is a little too obvious and non-descriptive.

        I thought that's what it was. Isn't PPP the abbrev'n for Point to Point Protocol?

        • PPP = Point to Point Protocol (link layer)
          P2P = Point 2 Point (application layer)

          You're right, but I was implying that P2P can mean different things to different people. I see it
          as more person-to-person technology.

          Thanks.
  • For a while I thought 'Peer-to-peer' was all about discussions between crusty old duffers in the House of Lords:

    "One's having terrible trouble with one's gout"

    or

    "Have you heard about the disgraceful amount of cider Lady Thatcher was seen drinking last night in the members' lobby"

    But alas, no.

    I'm glad we cleared that one up.
  • It's clear to me that the 'p2p' fad has begun to slip away, through a combination of people realizing it was nothing new and TLAs realizing that that part of the natural order must be crushed in order to maximize profits. (That's right, people, be afraid... they don't want to *ensure* profits, they want to *maximize* them, so throw any images of altruism out the window please).

    In comparison, I'd like to draw your attention to another recent buzzword/fad combination, the repeating rifle. Previous to its adoption as a standard military tool, it was employed by some individuals to great effect. Later, as one side adopted it wholesale, it gave an unbelievable advantage to a single army (The United States of Napster). Nowadays, however, everyone has a repeating rifle, thanks largely to the datahaven provided by the USSR... cheap AK-47s are available anywhere in the world, usually for about US $25. By then it was a moot point, since the buzzword race had moved on to nuclear weapons.

    Where my analogy breaks down is that I don't think .NET and 'internet services' are anywhere near nuke significance. Someone else must be coming up with something... somewhere...

  • With respect to the growing debates over "client" versus "server", I think that in the future we'll see a much larger shift back towards the "server" side. Let's look at the history:

    Initially, we had bare-bones, thin clients which were no more than glorified typewriters that could output their characters over copper wire to mainframes. The structure was very simple: if it could fit on a desk, it was a client. If it took a room to hold it, it was a server.

    Then came the personal computer. Soon people were setting up their own web servers, their own FTP servers, etc. This, IMHO, is when the definition began to be blurred, as clients started behaving as servers and servers faded away.

    Recently, though, the shift has changed back to a dedicated server because of the increasingly high demands of upkeep. Joe Q. User doesn't want to be bothered to keep up with updates (Code Red, anyone?) and so he decides to let other people deal with it through proxy servers.

    Everything runs in cycles; eventually, it will shift back, but for now, servers are here to stay.

    • To quote a comment in the article:

      "It seems likelier to me that peer- to-peer will converge on standards pioneered by the Web services people, rather than on standards arising directly out of the peer-to-peer world."

      That should be rephrased as

      "It seems likelier to me that peer- to-peer will converge on standards pioneered by Microsoft and TimeWarner, rather than on standards arising directly out of the needs of the peer-to-peer client world."

      Just my Humble Opinion
  • Why does Hemos dislike the term "peer-to-peer"? (P2P) I find it descriptive, especially since it seems that all involved in such a network are on the same level; in other words, peers.

    I can see why the RIAA, MPAA et al. don't like it though.
  • by kurowski ( 11243 ) on Thursday August 23, 2001 @12:09PM (#2208517) Homepage
    I never cease to be amazed by how much effort is put into creating new ill-conceived technologies to work around old ill-conceived technologies. For example,
    • because we chopped up the IP address space based on byte boundaries rather than bit boundaries, an artificial scarcity was created that led (in part) to the widespread use of DHCP and NAT
    • DHCP and NAT arguably broke DNS and prevented people from running traditional server processes on their boxen so we created P2P software
    • due to the numerous security problems that surface (due primarily to misconfigurations) we invent firewalls that block traffic
    • due to firewalls blocking everything but HTTP, we invent a whole new protocol stack on top of HTTP (i.e. SOAP)
    and so on and so on... I'd include the push of XML to "fix" the problem of differing binary data formats, and the creation of XML Schemas to make up for the lost type information in all those mismatching DTDs and so on. But you all get the point.

    I do admit that the ultimate goal of the web services vision is admirable, but it seems to me to be just a bloated (UDDI+WSDL+SOAP+XMLSchema+HTTP(+P2P?)) version of what many software agent research groups have been after for years. Come on people, stop the insanity! There's gotta be a better way!
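
    To make the layering concrete, here is a small sketch (the endpoint and method name are invented) of the same request expressed web-style as a plain GET and web-services-style as a SOAP 1.1 call tunnelled through an HTTP POST:

    import urllib.request

    # The "web-like" version: the URL itself names the resource.
    plain = urllib.request.Request("http://quotes.example/quote?symbol=MSFT")

    # The web-services version: an XML envelope inside an HTTP POST, with the
    # actual method name buried in the body and a SOAPAction header on top.
    envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <getQuote xmlns="urn:example:quotes"><symbol>MSFT</symbol></getQuote>
      </soap:Body>
    </soap:Envelope>"""
    soapy = urllib.request.Request(
        "http://quotes.example/soap",
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "urn:example:quotes#getQuote"})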

    (and no, I didn't read the whole article, i had to stop and release the built up rant pressure before the insanity blew my head open. go ahead and mod me down for being an offtopic troll now.)

    • DHCP and NAT arguably broke DNS and prevented people from running traditional server processes on their boxen so we created P2P software


      This is not true: if DHCP and DNS are configured properly, the client can update the DNS server, so the DNS server is kept up to date.

      NAT is a different story, though.
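
      For what it's worth, a dynamic update (RFC 2136) from a client looks roughly like this sketch, using the third-party dnspython package; the zone, key, and server address are placeholders:

      import dns.query
      import dns.tsigkeyring
      import dns.update

      # A TSIG key shared with the zone's primary server authenticates the update.
      keyring = dns.tsigkeyring.from_text({"dhcp-key.": "c2VjcmV0LXNlY3JldC1zZWNyZXQ="})
      update = dns.update.Update("example.org", keyring=keyring)
      update.replace("myhost", 300, "A", "192.0.2.42")  # myhost.example.org -> 192.0.2.42
      response = dns.query.tcp(update, "192.0.2.1")     # send to the primary nameserver
      print(response.rcode())                           # 0 (NOERROR) on success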

      • if DHCP and DNS are configured properly, the client can update the DNS server, so the DNS server is kept up to date.
        While this is true, I've never seen a network with DHCP and DNS configured to do this (and I've been lots of places with DHCP). Hence my inclusion of it in my tirade.

        Heck, one could argue that if your firewall is configured "properly", then all traffic can pass through it in both directions. But I haven't seen any firewalls configured that way, either.

        • Our network here at work does just this -- anyone can ping ANY device on the network, DNS or DHCP.

          :)
        • I've never seen a network with DHCP and DNS configured to do this (and I've been lots of places with DHCP).

          Agreed, it is not an often-used feature, probably for security reasons.


          if your firewall is configured "properly", then all traffic can pass through it in both directions

          Doesn't that defeat the purpose of the firewall? Besides some sanity checks, firewalls are designed to block the flow of data to unwanted hosts and/or ports.

          • Doesn't that defeat the purpose of the firewall? Besides some sanity checks, firewalls are designed to block the flow of data to unwanted hosts and/or ports.
            Yes. That's why I used "properly". Everything that firewalls do can be done better by applying the policy where it belongs (i.e. the router or host), so the only way to "properly" configure one is to open it up.
      • if DHCP and DNS is configured properly, the client can update the DNS server

        However, that relies on DNS clients respecting the cache limits described by the DNS server. My experience is that under Windows, not only does the OS cache DNS information without regard for caching hints, so does Explorer.

        • I haven't noticed this problem. Probably because the DHCP server likes to give the same IP address to each machine all the time, which is a good thing if the clients are caching DNS information.
    • If only I had a fiver for every time somebody has posted to some forum saying words to the effect of "If only the net had been designed *properly* from the start - we'd never have to implement all these kluges!"

      It's precisely BECAUSE of the absence of an over-arching design that the net has been and will continue to be so successful. I agree that's a mite hard to grasp when people are used to devoting their lives to designing elegant systems (and/or are attracted to the M$ concept of centralised control), but I believe it to be true.

      In design terms the net is a singularity - it needs to have a degree of chaos at its heart to function.

  • Shirky seems to be suggesting that web services are to be outmoded by his new vision of peer to peer. Peer to peer has its place but for efficiency many things require a centralised location for dealing information.

    If I want to send a letter the fastest way is for me to take it to the post office - not to hand it to someone randomly on the street in the hope that eventually it will be handed to someone who is going to the post office.
    • I didn't mean to suggest that WS are going to be outmoded by P2P -- in fact, I think I said that P2P was going to go away as a term while Web Services will become more important.

      The client/server issue is not one of whether C/S architectures are good -- they obviously are -- but about whether 'client' and 'server' are permanent or temporary designations. On the web, browsers are clients and servers are servers, while in Napster or OpenCOLA or Groove, the nodes are both.

      So all I am saying about Web Services is that there are a lot of things we can do when both halves of a client/server relationship can get and parse XML, and that models for this kind of two-way communication are closer to P2P than to the classic Web.

      -clay
      • ... but about whether 'client' and 'server' are permanent or temporary designations. ...

        Not sure I agree with that wholly. A client is always temporary, the server is fixed. A web service is exactly that - a service. In P2P we are equals, I can serve you, you can serve me. When I connect to a web site there is no possibility that I start serving web sites to it. I agree that there may be a conversation, as it were, but that does not equal P2P.

        I agree with you that there are lots of uses for a P2P system, however, web services are not one of them. I'm also not convinced that just because a system uses XML it is P2P. The two are not synonymous.

        I suppose a lot of this depends on what your definition of web services is. I am presuming (perhaps incorrectly) that you mean things that are accessed using a web page. If I'm wrong in this then please ignore all my points as they don't make sense anymore :-)
    • Any transaction can be thought of as "client/server". The difference here is that client and server are changing roles as needed.

      Your letter, if we talk about email, provides a good example. Take a look at the headers of your next message. It has been handled by a series of cooperating hosts completely unknown to you. The route your letter takes can change according to various circumstances. You may always hand your mail off to the same host initially, but where it goes after that is not up to you. The network decides the "cheapest" route and gets it to the receiver without any input from your end.
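
      To make "look at the headers" concrete, here is a tiny sketch in Python (the filename is a placeholder): each relaying host prepends a Received: line, so the hops a message actually took can be read back out of any saved message.

      from email import message_from_string

      raw = open("some_message.eml").read()            # any saved message will do
      msg = message_from_string(raw)
      for i, hop in enumerate(reversed(msg.get_all("Received", []))):
          print(f"hop {i}: {' '.join(hop.split())}")   # oldest hop printed first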

      In that way, peer to peer is not really new or even that exciting-- the underlying internet protocols have used similar concepts to provide reliable end to end communication through an ill-defined swarm of hosts for a long time.

      We can always reach the host we want, but not always through the same path. Likewise, once we have standardized some interfaces, we may always be able to retrieve the data we want, but not necessarily always from the same host. We're just moving an old, good idea further up the protocol stack.

      So what happens when we want to store something that no one cares about except us? It's one thing to have multiple independent sources of something everyone wants, but how many machines are interested in warehousing some guy's letter to his girlfriend? (Not a good letter-- a mushy boring letter.)
      • I appreciate the P2Pness of email. I'll agree that email is peer-to-peer. What I was saying was that just because peer-to-peer has its uses does not make it the be all and end all. It is simply another tool in the kit, we can use it where appropriate; there are many situations where it is not appropriate.

        So what happens when we want to store something that no one cares about except us?

        I think we are in agreement, there is no need to make everything peer-to-peer. In the end P2P is kind of client/server/server/client. It is just the same technology used twice. If that's what's needed then great, otherwise why waste the bandwidth?
        • I think we're looking at this from pretty much the same viewpoint. On the one hand, it's exciting to think about, but on the other hand it's hard to see this as being immediately useful in most environments.

          Still, every time our CAD operators start to bump against the storage limits on our file server, I start to think about all the unused drive space on their client machines. They have 5-10 GB apiece I am always telling them not to use because it's not secure and doesn't get backed up.

          In the "wouldn't it be great" department, it would be nice if I could organize their unused space and processor cycles into a kind of "storage hive" which would appear to them (and my backup scripts) as a network file system. We have the LAN bandwidth to spare, but I am not sure how much cost would be associated with leaving all the workstations on all the time.

          Then there are the problems associated with the reliability of a certain operating system they are all running. It seems to self destruct if you don't reboot it fairly regularly.

          I think this differs from a SAN (storage area network) in some subtle but important ways, and it's as close as I can get to a "killer P2P app" for business.

          Essentially, ditch the server and distribute its tasks among the underutilized clients. The whole server is replaced with a cluster of processes running on client machines.

          Sounds great to me, but I don't think it can be built with the pieces we have today. I am not about to bet the bank on a swarm of Win98 boxen...

          As FreeNet progresses, I find myself wondering how hard it would be to write a client for it that would present a CIFS (Samba, SMB) interface to applications.
  • by iforgotmyfirstlogon ( 468382 ) on Thursday August 23, 2001 @12:15PM (#2208548) Homepage
    I had a friend that got a promotion because he got to be good friends with the guy that owned the boat next to his in the marina.

    Would that be considered "Pier to Pier Networking"?

    - Freed

  • There was some discussion of "web services" on Perl Advocacy list recently. I come down on the side that says Apache is all you're ever going to need. Look, Slashdot is technically an application... does that make Slashcode [slashcode.org] an n-tier application server? Purlease. It's just marketing - web servers are SOOOOO 90s, now we call 'em "web services".
    • by cshirky ( 9913 ) on Thursday August 23, 2001 @12:50PM (#2208754) Homepage
      This is a really important debate right now, and there are no good answers. The debate comes down to this: how much do we need to do to the Web as we have it today to create an environment where programs can be as interoperable as web browsers and servers are today?

      There are growing criticisms of the consensus vision of web services -- http / SOAP / WSDL / UDDI -- largely on the grounds that its complexity is un-web-like, and that there are uninvented and possibly uninventable layers required above UDDI for any two arbitrary applications to be able to find each other in the dark.

      Dave Winer of Userland [userland.com], inventor of XML-RPC and co-designer of the SOAP spec, advocates an embrace of these two protocols by the Open Source movement as a lightweight way to advance the battle for interoperability. (Dave's ideas in many ways answer the Will Open Source Lose the Battle for the Web? [earthweb.com] article from earlier this month.)

      Another group, in line with your "Apache is all we need" idea, has taken Roy Fielding's idea of the REST (REpresentational State Transfer) architecture as a way to extend existing web semantics further into the domain of applications. They have started a RESTWiki [conveyor.com] to expand on those ideas.

      This is all a big mess right now, with no obvious clarity coming any time soon, but two things we can be certain of are that experiments with application-to-application traffic are going to increase dramatically in the next 12 months, whatever the framework, and that with MSFT driving this idea as part of .NET, even if a lot of it is hype, it will affect our world a great deal.

      -clay

      • But... forgive me if I appear dense... why should we want web sites to be interoperable? I guess this is the reason I'm so baffled by this 'web services' thing: I just don't understand what they're needed for. OK, Microsoft would like Expedia to be able to share user data with, say, Hotmail - perhaps so they can implement a single sign-on, perhaps so that they can make sure they rotate the banner ads you see. Whatever. But why should the sites I use (to take a random example) need to interoperate? Slashdot, NASA Space Science, The Register, BBC News, Security Focus. Hmmmm. Single sign-on isn't going to save me much time there. Is this some sort of eCommerce thing? And if so, why should competitors want to share customer data?

        Sorry, I just don't see it. I think it's marchitecture designed by people who took Wired magazine a little too seriously. "Experiments with application-to-application traffic"? Do me a favour! It smells like a dead duck, it looks like a dead duck, and frankly it quacks like one, too.

        Time will tell - personally, I reckon this is the first 'VRML' of the 21st Century.

        • But... forgive me if I appear dense... why should we want web sites to be interoperable?

          This reply is a little late but this question seemed so strange to me that I couldn't help but answer it.

          First, you dismiss single sign-on, but if you are a typical end-user with a dozen subscriptions to a dozen sites, single sign-on is a big deal and a massive improvement over sticky notes on your monitor.

          Second, if you are selling something, wouldn't you want your product specifications (and in some cases your price) disseminated as widely as possible? If you sell digital cameras, you would absolutely love it if digital camera sites can easily incorporate your product specifications into their site.

          People already do this stuff. How do you think the non-Slashdot Slashboxes work? They are XML-based web services (not SOAP, but web services nevertheless). The hype around web services is just about standardizing the information sharing that organizations already do. Once you standardize it, you build network effects and economies of scale.

          Is this some sort of eCommerce thing? And if so, why should competitors want to share customer data?

          Competitors don't. Partners do. Anyhow, you seem a little obsessed with single sign-on and user data. User data is just one kind of data worth sharing. There is also product data. Realtime product availability information.
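
          To put the Slashbox example in code: consuming one of these feeds is just fetching XML over HTTP and picking it apart with a generic parser, no SOAP or WSDL required. A sketch in Python (the feed URL is Slashdot's old RDF feed, quoted from memory; any feed would do):

          import urllib.request
          import xml.etree.ElementTree as ET

          with urllib.request.urlopen("http://slashdot.org/slashdot.rdf") as resp:
              tree = ET.parse(resp)

          # Print every <title> element, ignoring whichever RDF/RSS namespace it sits in.
          for el in tree.iter():
              if el.tag.endswith("}title"):
                  print(el.text)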

          • It's the discussion that refuses to die ;)

            I'll address your points in order.

            Firstly, I /am/ a fairly typical end-user, and I do what 95% of end-users do: I have a small number of username/passwords which I have been recycling for the past four or five years. Obviously you don't want to do this for anything super-sensitive, but who in their right minds entrusts super-sensitive personal data to a third party?

            Secondly: if I'm selling something on the web, then my specifications and pricing data are /already/ as widely distributed as possible (routing glitches permitting.) That's what the first two "W"s in "WWW" stand for! I definitely wouldn't want third parties snarfing product data from my site and selling on to consumers: I'd be losing all that interesting marketing and demographic data. If "DigitalCameraNews.com" or whatever wants to link to my site, fine, I'm delighted.

            Slashboxes? Does anyone actually use them? I don't. Perhaps I'm not that typical an end-user after all... my preferred sources of news are not a subset of the given slashboxes: slashboxes are all very kewl, but they don't really scale very well.

            Realtime product availability data comes from the vendor's own stock control systems. You really think Argos (a typical UK consumer goods reseller) are going to entrust their stock control to manufacturers? I just don't think it's going to happen on any significant scale.

            Returning to my VRML example: were you around when VRML was being hyped as the future of the Net? Just because something is technically possible, doesn't mean it'll work in the real world. (See WAP for another example: if you want to make pots of money, short stocks of telcos who spent a fortune on 3G licenses now, cos they'll be bankrupt or taken over within five years. )

            Lots of time and money was put into demonstrating that the whole idea just wasn't going to fly. I'm open to the remote possibility that I'm wrong about this ;) but I'm not holding my breath. I guess only time will tell.

            • There are good reasons for organizations to share structured data. My company does it. Slashdot does it (slashboxes). Yahoo does it (where do they get stock quotes and maps?). Google does it (ODP RDF dumps drive the Google Web directory). General Motors does it (EDI). That's what a purchase order is. Do you think that we'll still use primarily paper purchase orders in ten years?

              The only question is whether we all hand-roll our own solutions or have standards for doing it. XML is the structured data specification and web services are about how to move the information from place to place.

              VRML was totally different because it was about building customer demand for a new product (3D). Sharing structured information is not new. Most web-connected organizations do it in one way or another today. Just as the Internet glued together the disparate proprietary networks, the web services standards try to glue together disparate information sources.

              If you want to argue against the necessity for particular standards, we could have an interesting discussion. But you seem to want to deny that organizations need to share information in real time. Or that we need standards to lower the cost of doing that.

              • OK, some good points. I'm certainly not arguing that enterprises have no need to exchange structured data, or that XML isn't a Good Thing that is being widely used already and will grow. As you say, in the real world, [very expensive but highly reliable and available] systems such as EDI, or the UK BACS cheque clearing system (interbank funds reconciliation), etc. are used. And, at the other end of the spectrum, there are Slashboxes and probably 'some' similar uses.

                This certainly does /not/ mean that a significant number of /web sites/ need to share data. Commercial partners: yes. Web sites? nah. EDI is expensive partly because it doesn't run over the internet - let alone over HTTP. There are very good reasons for this...

                VRML wasn't purely marketing driven. Everyone who jumped aboard the bandwagon had an angle on how it was going to be great, including many intelligent dedicated code hackers and highly technical people. In the same way, I think the 'web services' meme clicks with a lot of developers who've had to reinvent the wheel over and over again. I fail to see how this can drive its use, though; companies prosper or file Chapter 11 based on criteria other than whether the developers have a fun time building the systems.

                Perhaps I have misunderstood what is meant by "web services". A backend database replication or stock updating system has nothing to do with the web except that the data /may/ end up used on a site. But that's nothing new: how does Slashdot (or ZDNet, for that matter) get their content? How does Amazon get its content?

                • Okay now we are converging.

                  "Web Services" don't necessarily have anything to do with "Web Sites" except insofar as most modern development is exposed through a website in one way or another.

                  Web services are called Web Services because they use Web standards like HTTP and XML. Plus it just gets the VCs excited to prefix anything with the word "Web" (or it did a couple of years ago when the term was coined). Essentially they are distributed computing protocols built on top of existing Internet protocols.

                  Even so, some web SITES do want to share structured information. Check out meerkat.oriellynet.com. Slashdot probably gets its slashbox content using a system similar to meerkat. Google gets XML content from the Open Directory Project.

                  Also, one point that Clay Shirky made is that there are some radicals who think that HTTP is the only protocol you need to do structured information interchange -- whether you are doing "Web site" stuff or not, they claim you should expose your structured data through HTTP(s). (even if you only give the password to your partners)

                  EDI is expensive partly because it doesn't run over the internet - let alone over HTTP. There are very good reasons for this...

                  Many people disagree with you. Most EDI people are running towards the Internet as quickly as possible. That's the whole point of ebXML. Maybe value added networks will survive for security and quality of service but non-IP protocols are dead.

  • This is probably slightly offtopic, but I wanted to gauge people's reactions, and this seems like a good place to do it...

    Search engines are having a harder and harder time spidering the web as time goes on. There are more and more sites to spider and more and more indexed sites to update. This seems like a perfect application for a peer to peer application - kinda like gnutella except only the indexing information would be held by the nodes rather than the data itself... There are lots of idle machines out there with idle bandwidth - we could create a peer to peer solution where each node spiders a minute part of the web. When it's done, it could send the info to its immediate peers so that a node being down doesn't hide the index. A search could be done using a similar technique to gnutella but the advantage would be that the system could index all the information that is currently out there rather than the items stored in the application. Does anyone know if anything like this has been done, or is even feasible?
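
    One building block such a system would need is a way for every node to agree, without coordination, on who is responsible for spidering which site. A minimal sketch in Python (the node list is hypothetical; a real network would discover peers dynamically):

    import hashlib
    from urllib.parse import urlparse

    NODES = ["node-a", "node-b", "node-c", "node-d"]

    def responsible_node(url):
        # Every node runs the same hash, so they all assign a site to the same owner.
        host = urlparse(url).netloc
        digest = hashlib.sha1(host.encode()).hexdigest()
        return NODES[int(digest, 16) % len(NODES)]

    for u in ["http://slashdot.org/", "http://www.oreillynet.com/", "http://freenetproject.org/"]:
        print(u, "->", responsible_node(u))

    (Plain modulo hashing reshuffles everything when nodes join or leave; a real system would want something closer to consistent hashing.)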
  • Hmm,

    My understanding is that a 'server' is a machine that offers 'services'. So if you connect to a machine that is offering a service it is - by definition - a server.

    If I'm right then we'll all have to agree that P2P is an empty internet buzzword that has no meaning.

    Even with Napster it wasn't really P2P. You were getting MP3's from a machine 'serving' them. That makes it a server.

    Maybe P2P should be renamed to NEN, or 'normal everyday networking'.

    Claric

    • You were getting [files] from a machine 'serving' them. That makes it a server.

      Why can't a server also act as a client? We need to get away from the idea that a "server" must always have a permanent, high-throughput connection with three or more nines reliability at the edge of the network.

      Even with Napster it wasn't really P2P. You were getting [files] from a machine 'serving' them. That makes it a server.

      But that machine was also 'getting' files from other servers. Read the article. In a client-server system, the client and server have relatively fixed roles, whereas in a distributed or "peer-to-peer" system, the clients and servers exchange roles frequently or even concurrently, preferably even when the agents are on transient network connections. For example, I may be downloading the latest Linux kernel from user foo while serving the latest XFree86 release to user bar.
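
      A toy sketch in Python of a node wearing both hats at once (the peer URL and port are made up): it serves its own files over HTTP in a background thread while simultaneously downloading from someone else.

      import threading
      import urllib.request
      from http.server import HTTPServer, SimpleHTTPRequestHandler

      # "Server" half: share the current directory on port 6346.
      httpd = HTTPServer(("", 6346), SimpleHTTPRequestHandler)
      threading.Thread(target=httpd.serve_forever, daemon=True).start()

      # "Client" half: fetch something from another peer at the same time.
      with urllib.request.urlopen("http://peer.example:6346/latest-kernel.tar.gz") as resp:
          data = resp.read()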

        • This is what I'm saying! All it is that makes a server a server is it running services. Think of a NIS domain. You have the NIS master. You have a home server and a mail server. These are both clients to the NIS master. You log in on this NIS master. The NIS master is a client to the home and mail servers. ANYTHING can be a client or a server. I did read the article. It was a waste of time.
        • All it is that makes a server a server is it running services.

          OK. I guess I was bitching to ISPs who think "server" is "something you pay $$$$$$ per month to run" and not something that the average consumer can run.

  • The seeming evolution of client-server relationships (and definitions) stems from our increasing ability to place more and more advanced technology into the basic PC. In a very basic and abbreviated form, a server transmits information (typically content, not simply SYN/ACK) to a client, which receives it. Modern PCs have the ability to host information, and are typically capable of fulfilling both roles simultaneously. Hence the machine-based idea of the client-server model has changed to a typically event-based concept. Your PC can act as a client when it displays the latest Slashdot postings, but also as a server when someone decides to download pornography from your gnutella app. It is this very ability that makes P2P possible, each machine exchanging data according to the specifications of the user.

    Furthermore, "Peer-to-Peer" infers an equality relationship, which in turn denies the client-server heirarchical model. Hence, the necessity to revisit terms that no longer fit the previously typical standard.
  • What I see in this article, the constant references to web services learning from P2P and vice-versa, is already realized in SOAP. We've just got to harness it.
  • The P2P seems to be the way to de/centralize the storage of information and allow an individual "passport" to allow authentication and services. This P2P model could be the basis of MONO.



    Let's hope that's the way the industry goes.



    It's actually very exciting, if your ISP doesn't start charging you for a P2P account, like they charge for a Presence on Web account.

  • I bet some people would pay a lot for a gnutella (for example) anonymizer/redirect cache service.

    example:
    [customer machine]
    (connects to)
    [SUPER NODE: file location cache, server cache, new anonymous name]
    (connects to)
    [gnutella network cloud]

    Slogan: Guaranteed anonymity barring court order, instant connects, instant searches.

    Hey, anonymity might work for ISPs too, but no infrastructure would be necessary for the big cache service, just bandwidth.

    PS: CmdrTaco, "Lameness filter encountered. Post aborted!" was triggered by an ASCII network diagram.
  • After reading the article, it seems that the reason this isn't "catching on" is that there's really nothing to it. There seems to be this vague idea that peer-to-peer is some holy grail of networking, and that any day now it will lead us away from the "limitations" of the client-server model.

    Unfortunately, no one seems to be able to come up with many concrete applications of this idea (other than a few mentioned, like music sharing). The reason is that most of the functions we expect from a network are based on client-server technology. There seems to be this hazy notion of "conversations" between machines, in the same way that two friends could have a chat, and this is not the kind of fluffy computing model that I'm looking for when I visit my bank online to pay my bills.

    I definitely agree that the HTTP protocol is ill-equipped to deal with the demands of today's network applications, but the solution is to standardize a better protocol, not throw out the entire model.

    -- Brett

  • You know, five years ago "the Web" meant HTML documents moderated or mediated by HTTP. "The Web" now means the publicly accessible Internet, so I think we're not only going to keep the term "Web services," I think inasmuch as it becomes popular, it's going to change the meaning of the word "Web" away from the narrower protocol-driven definition and toward the larger public-Internet definition.

    You know, there's a perfectly good word for "the publicly accessible Internet": the Net. The Web is a bunch of html pages hyperlinked together. The Net is that plus everything else. It's nice to be able to express oneself with precision, and if "the Web" becomes a blanket phrase for the entire net, we'll lose a little.
  • So far it seems to me that everyone except "Trollman 5000", who made the only mention of the RIAA and MPAA in this whole topic, is missing the point.

    There are major economic forces, forces so big that they are (hopefully temporarily) superseding the US Constitution, that are effectively trying to turn the Internet into a big broadcast medium. Essentially, to a media mogul used to TV and Radio, every electronic distribution means ought to look like TV and Radio. (Kinda like the old hammer/nail thing)

    Centralized focus means ease of control. It means you can easily go after an ISP for content posted on their servers. The lawyers can wield a big OFF button.

    Peer-to-peer is much more difficult to police, though it sounds as if they're trying against Gnutella.

    But then realize just WHO runs the cable ISPs, and then take a look at their TOS, and it's immediately obvious WHY. Aside from not having adequate amounts of the correct competence to run a data network, they know that personal servers and peer-to-peer are more difficult to control. Therefore, "No servers for the use of others" is the most common rule on Cable. Note that DSL is generally more open, and that fits with the parent organization being a non-content-owner.

    But as a cable subscriber with no hope of DSL, peer-to-peer is beyond my reach.

    So...

    We need a peer-to-peer proxy, for two reasons.

    First, it lets me connect out of cable, and once connected to the proxy, it lets me act as a peer. If the cable companies got a little more enlightened, they might even run the proxy themselves. (Yeah, right! Who wants to wait?)

    Second, as Code Red has shown, with default Microsoft security and Joe Sixpak running his home PC, the Internet simply isn't a safe place. For the most part, perhaps ISPs should allow NO incoming connections, by default*. A peer-to-peer proxy would be the only thing keeping the concept generally viable, in that case.

    (*) By the same token, they should allow the knowledgeable user to open ports. (Again, fat chance!)
  • There's a big push in the telco industry to get DSL "under control". RedBack Networks [redback.com] is providing much of the technology for this. Their white papers [redback.com] are scary.

    RedBack has a model for DSL that the telcos love.

    • "You click on an icon for the service or application you require, fill in some basic subscription and payment information and gain access to the service you need. When you have finished using the service, you "hang up".
    And they mean it. They want users to open separate PPPoE connections for separate, and separately priced, services. Their video conferencing example uses $0.35/minute as a suggested price. Users will no longer connect to "the Internet", they will connect to specific, paid services, billed through the telco via the PPPoE connection mechanism.
    • The SMS allows Service Providers to "resell" the same link by offering subscribers dynamic access to multiple services, thus generating more revenue per subscriber.
    That's pitched to telcos. The whole RedBack push is "more revenue per subscriber line". Quality of service can be different for each PPPoE connection. Thus, the "quality of service" mechanism can be used to throttle the non-premium services down to low data rates. This encourages use of premium services, and slows down "unauthorized" distribution of content.

    The telcos have tried this business model before, with X.25 (an overpriced flop), Minitel (an overpriced flop in the US), ISDN (an overpriced flop in the US), and 900 numbers (an overpriced success, but only for porn.) Here we go again.

    None of this would go anywhere except that in the US, DSL is becoming an unregulated monopoly. This gives monopoly telcos the power to force this on their customers.

  • Shirky is obviously a smart cookie, and he's obviously been thinking about peer-to-peer technologies for a long time. Just this relatively short glimpse into his ideas is very thought-provoking.

    However, it contained the dumbest phrase I have read in weeks, clearly proof that the smartest people make silly mistakes from time to time:


    Not only do I think that's inevitable, I think that any energy spent attempting to avoid that is probably pointless.


    Well, if it's inevitable, then by definition trying to avoid it is pointless (not just "probably pointless", pointless).

    Sorry, I couldn't resist pointing this out.
    --Q
  • Jeepers, we were going to have a shiny happy remoted interconnected interoperating world with CORBA. And before that it was RPC. Now we're supposed to get it with UDDI and SOAP and so forth. Why? What's changed that people are going to completely throw out the window the idea of orthogonal clients and infrastructure?

    Let's look at CPAN for a second. Here's how you run a CPAN site: cd to a public ftp directory and wget --recursive ftp://some-cpan-url. The smart client figures out the rest, and people can use dumb ftp clients too.

    Here's how slashdot disseminates its feeds in XML and RDF and HTML: you grab it from a URL, and the webserver shovels it at you, blithely ignorant of the semantic meaning of the bits it's transferring around.

    In the magical world of webservices, you now get to write special methods on the server end, configure the server to invoke them, and in general ensure that you don't interoperate with anything. Oh yes, you also get to classify the whole system with some big bureaucratic UDDI schema that is supposed to describe it all to any capable client, as if you didn't already write the client to work with this domain-specific protocol already.

    All this might be great for intranet apps ... I just don't see it serving its purported purpose of generic information interchange.
    • Here's how slashdot disseminates its feeds in XML and RDF and HTML: you grab it from a URL, and the webserver shovels it at you, blithely ignorant of the semantic meaning of the bits it's transferring around.

      Welcome to web services.

      There are a lot of possible layers there. Yes you can use SOAP and WSDL if you want to, but at its simplest layer you're providing a service of information (rather than HTML) over the internet (doesn't have to be over HTTP). Slashdot is providing a simple web service in its RSS feed. I think you'll see a lot more people coming to the conclusion that they don't need SOAP and WSDL. Straight XML served over HTTP perhaps with parameters in the querystring is an excellent way to deliver web services.

      I have a paper on why this is an excellent way to do things that I gave at the open source conference, but my web server is offline right now and won't be back up for about 4 weeks (due to the wonderfulness of British Telecom). When it's back, you'll be able to find it at http://axkit.org/docs/presentations/tpc2001/
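
      In case it helps, here is a minimal sketch in Python of the style being described (the port, parameter, and data are invented): one handler, a parameter in the querystring, an XML document back, and no SOAP or WSDL anywhere.

      from http.server import HTTPServer, BaseHTTPRequestHandler
      from urllib.parse import urlparse, parse_qs

      HEADLINES = {"news": ["Shirky On P2P"], "apache": ["Apache 1.3.x released"]}

      class FeedHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              query = parse_qs(urlparse(self.path).query)
              topic = query.get("topic", ["news"])[0]
              items = "".join(f"<item>{t}</item>" for t in HEADLINES.get(topic, []))
              body = f'<?xml version="1.0"?><feed topic="{topic}">{items}</feed>'.encode()
              self.send_response(200)
              self.send_header("Content-Type", "text/xml")
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          HTTPServer(("", 8080), FeedHandler).serve_forever()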
