Stanford P2P Group Releases Software and Analysis

Bert690 writes "Apropos of yesterday's Slashdot story on BitTorrent: Some folks at Stanford have released a paper on P2P "bucket brigade"-like streaming that contains *an actual analysis* and a downloadable implementation." Could this be considered actual research on the subject of p2p networks and scalability?
  • postscript (Score:4, Informative)

    by Account 10 ( 565119 ) on Saturday March 23, 2002 @01:42PM (#3213380)
    Read it here [samurajdata.se]
  • by tigycho ( 95101 ) <david.serhienkoNO@SPAMgmail.com> on Saturday March 23, 2002 @01:42PM (#3213383)
    You can get several versions of this from this [stanford.edu] page, including a pdf [stanford.edu] version or a plain text [stanford.edu] version.
  • Other P2P research (Score:4, Informative)

    by Jordan Graf ( 4898 ) on Saturday March 23, 2002 @01:54PM (#3213433)
    P2P research by well-known research institutions is far from unheard of. MIT has Chord [mit.edu], a project to produce robust, scalable distributed systems using peer-to-peer ideas. They have an efficient hash lookup algorithm that could form the basis of many P2P systems, and they have code available for download. (A toy sketch of the lookup idea follows this thread.)
    • The thing that I find refreshing about the work from this particular group at Stanford is that much of it has immediate applications. It's tough to say that about Chord (MIT), OceanStore (Berkeley), and several other academic P2P research projects.
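
    A toy sketch of the consistent-hashing lookup idea behind a Chord-style DHT, in Python. Illustrative only: real Chord routes through finger tables in O(log n) hops rather than keeping the full node list, and the ring size and node names here are invented.

        import bisect
        import hashlib

        M = 2 ** 16  # identifier ring size; real Chord uses 2**160

        def ring_id(name):
            # Hash a node address or a key onto the same identifier ring.
            return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

        class Ring:
            def __init__(self, nodes):
                # Keep node IDs sorted so a key's successor is found by bisection.
                self.ids = sorted(ring_id(n) for n in nodes)
                self.node = {ring_id(n): n for n in nodes}

            def lookup(self, key):
                # The node responsible for a key is its first clockwise successor.
                i = bisect.bisect_right(self.ids, ring_id(key)) % len(self.ids)
                return self.node[self.ids[i]]

        ring = Ring(["node-a:9000", "node-b:9000", "node-c:9000"])
        print(ring.lookup("some-file.mp3"))  # whichever node follows the key's ID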
  • P2P Research (Score:2, Informative)

    by Anonymous Coward
    There is lots of P2P Research going on.

    Check this out:
    http://www.cs.rice.edu/Conferences/IPTPS02/ [rice.edu]

    This happened this month at MIT.
    • Re:P2P Research (Score:2, Interesting)

      With all due respect, your statement "There is lots of P2P Research going on" is about as dumb as saying "There was a lot of relational database research in the 1970s".

      P2P is the latest big hit in the software/networking realm. Email's old-school these days, the Web's getting tiresome, but P2P is THE exciting new technology.
  • Why? (Score:2, Funny)

    I bet the only reason they did this research is they didn't have enough time between studying and partying to mess around with multiple p2p progs. They gotta get their fill of divx movies somehow!
  • by colmore ( 56499 ) on Saturday March 23, 2002 @02:24PM (#3213537) Journal
    Abstract
    The high bandwidth required by live streaming video greatly limits the number of clients that can be served by a source. In this work, we discuss and evaluate an architecture, called SpreadIt, for streaming live media over a network of clients, using the resources of the clients themselves. Using SpreadIt, we can distribute bandwidth requirements over the network. The key challenge is to allow an application level multicast tree to be easily maintained over a network of transient peers, while ensuring that quality of service does not degrade. We propose a basic peering infrastructure layer for streaming applications, which uses a redirect primitive to meet the challenge successfully. Through empirical and simulation studies, we show that SpreadIt provides a good quality of service, which degrades gracefully with increasing number of clients. Perhaps more significantly, existing applications can be made to work with SpreadIt, without any change to their code base.
    The paper is more about solving the problems with streaming multicast. That is, it is prohibitively expensive for small-time providers to stream to more than a few users.

    I work for an unlicensed college radio station [wbar.org], and since our broadcast radius is so small, we stream everything with RealAudio (not my choice). Once we hit about 20 online listeners or so, things start crapping out.

    We're upgrading our server, but that won't change things dramatically. This paper suggests a way that high-bandwidth listeners could relay the stream and reduce the server's load (sketched below). It uses P2P software, but the focus is streaming.
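
    A minimal sketch of what that redirect-based join could look like, in the spirit of the abstract; the fan-out limit, class names, and least-loaded-child policy are assumptions for illustration, not details from the paper:

        MAX_CHILDREN = 4  # assumed per-peer fan-out; not a value from the paper

        class Peer:
            def __init__(self, name):
                self.name = name
                self.children = []

            def join(self, newcomer):
                # Accept the newcomer if we have a spare upload slot; otherwise
                # redirect it to our least-loaded child. The source's own load
                # stays bounded no matter how many listeners arrive.
                if len(self.children) < MAX_CHILDREN:
                    self.children.append(newcomer)
                    return self
                target = min(self.children, key=lambda c: len(c.children))
                return target.join(newcomer)

        source = Peer("radio-source")
        for i in range(20):
            listener = Peer(f"listener-{i}")
            parent = source.join(listener)  # listener now streams from parent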
    • Wouldn't this idea be somewhat like a redirector scheme where the clients would just take the places of the traditional 'reflectors'? The problem is you need a client to get a whole new viewer - or have them install some sort of system service. This would then again make your servers/reflectors either donated or expensive machines scattered about.

      Currently there are some icecast and shoutcast servers which will simply provide you with a playlist of several servers. You start at the top, and when capacity is full on that server you are `booted` to the next (a sketch of this fallback follows this comment).

      One way could be to stream MP3s [or later video?] through advanced shoutcast/icecast servers. I believe they both have the ability to log into another server and re-broadcast. Adding media player plug-ins [or whole P2P software if you want to piss us all off] can then link up to a server and help.

      Somehow the list is updated either by the master server (central=1 as defined in the settings?) or by the P2P network. Search a genre, song, album, whatever, and you can get onto that broadcast. Some users can re-broadcast.

      Actually, as I think of it, the idea is kind of similar to the 'leaf-node' idea proposed by LimeWire in their UltraPeers. Say the P2P software is somewhat web-based and keeps databases of the servers. Basically, icecast/shoutcast could work for now, but as you log into the show you are helping your corner of the world get it too.

      I just hope the end idea doesn't go around pinging their servers to death while trying to listen and broadcast.
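
      A sketch of that playlist fallback, assuming plain HTTP streams; the relay URLs are invented, and real icecast/shoutcast behavior will differ in the details:

          import urllib.request

          RELAYS = [  # stand-ins for the entries of a .pls playlist
              "http://master.example.org:8000/stream",
              "http://relay1.example.org:8000/stream",
              "http://relay2.example.org:8000/stream",
          ]

          def connect(relays):
              # Try each server in playlist order; a full or dead server
              # just "boots" us along to the next entry.
              for url in relays:
                  try:
                      return urllib.request.urlopen(url, timeout=5)
                  except OSError:
                      continue
              raise RuntimeError("all relays full or unreachable")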
    • SpreadIt

      *MrHat wipes tears from his eyes*

      Considering what "P2P" is primarily used for, they really couldn't have picked a better name.

    • I don't like that idea. If your station doesn't have enough bandwidth to serve its listeners, I don't see using my bandwidth to serve some of them as a legitimate solution.

      Same goes with the game companies who want to use this method to save bandwidth costs on game demo downloads. I see no reason that they should be able to use my bandwidth to serve their content to other people.
  • Some questions... (Score:3, Insightful)

    by neonstz ( 79215 ) on Saturday March 23, 2002 @03:37PM (#3213785) Homepage

    After reading the paper I have a few questions. (Yeah, I know this is just research).

    If this is implemented in the real world, is each user supposed to use his/her outgoing bandwidth for this? What about people unfortunate enough to have a monthly limit?

    What if the same connection is used by more than one person/client? With 4 PeerCast nodes, or maybe just 1 PeerCast node on the same connection as a web server, will PeerCast detect that there suddenly is a lot less bandwidth available than just two minutes ago (maybe because of slashdotting :)?

    • Do some ISPs targeted at end users really have outbound bandwidth quotas? I've never heard of any myself, but then what do I know?!

      Any "real world" implementation would undoubtedly exploit the concept of a super-peer, that is, allow only relatively powerful systems with decent bandwidth to serve as non-leaf nodes. Firewalled or NAT'ed nodes can't be anything but leaves in this system without some protocol extensions.

      The system as described seems perfectly adequate for deploying on a LAN, but as you suspect would need some tweaking for a heterogeneous network. These tweaks would be nothing beyond what systems like Morpheus and next-generation Gnutella nets are already doing though.
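
      A sketch of such a super-peer rule; the bandwidth cutoff and field names are illustrative guesses, not from any protocol spec:

          from dataclasses import dataclass

          MIN_RELAY_UPSTREAM_KBPS = 256  # assumed cutoff for serving others

          @dataclass
          class PeerInfo:
              upstream_kbps: int
              publicly_reachable: bool  # False for firewalled / NAT'ed hosts

          def role(peer):
              # Only reachable peers with decent upstream become interior
              # relays; everyone else stays a leaf and is never asked to serve.
              if peer.publicly_reachable and peer.upstream_kbps >= MIN_RELAY_UPSTREAM_KBPS:
                  return "relay"
              return "leaf"

          print(role(PeerInfo(768, True)))   # relay
          print(role(PeerInfo(768, False)))  # leaf: NAT'ed, can't accept connections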
  • I suppose that would depend on your definition of "Actual Research". It sure seems like it to me.
  • more p2p research (Score:1, Informative)

    by Anonymous Coward
    Another P2P project at Berkeley: http://www.cs.berkeley.edu/~ravenben/tapestry/

    I agree with the others that p2p research is nothing new, although it's just starting to get "hot". In fact, there are several conferences devoted to p2p research papers, e.g.:
    http://www.ida.liu.se/conferences/p2p/p2p2001/
  • by Anonymous Coward
    Academic institutions have done plenty of P2P research. The paper titled Overcast certainly resembles this one, but it's a year old. As mentioned above, Chord was a great paper. Pastry, done in cooperation with Microsoft Research, falls into the same category. Peer to Peer research is highly active at universities; I should know, as I'm doing some myself right now at Duke.
  • by bramcohen ( 567675 ) on Saturday March 23, 2002 @08:33PM (#3214643)
    Any tree-based distribution mechanism has no way of utilizing the upload capacity of its leaves, resulting in a huge amount of wasted capacity (a quick back-of-the-envelope illustration follows this comment).

    The reason to have a tree structure rather than a mesh structure is, quite simply, that a mesh structure is a lot harder to implement.

    BitTorrent [bitconjurer.org], which I'm the author of, does a mesh properly. It also has real-world deployment experience - it held up against slashdotting quite well. Thanks go out to everyone who's downloaded using it.

    I'm a bit skeptical of their claims of robustness and QoS. I have real experience with the way real machines behave on the net, and trying to get real-time streaming working before you've even got file transfer going seems like putting the cart way before the horse.

    There's also the issue of interruptions when peers higher up in the tree drop out or become slow, and then there are the leeching problems...

    As for doing simulations, I'd love to have a way of doing simulations which was at all useful, but my experience has been that real-world net churn and congestion behavior is just so funky that back-of-the-envelope calculations are as good as you're gonna get.
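
    One such back-of-the-envelope calculation, illustrating the wasted-capacity point about trees (the peer count and fan-outs are arbitrary):

        def wasted_fraction(n_peers, fanout):
            # In a full k-ary tree of n peers, interior nodes number about
            # n/k, so roughly a (1 - 1/k) fraction of peers upload nothing.
            interior = (n_peers - 1) // fanout
            return 1 - interior / n_peers

        for k in (2, 4, 8):
            print(f"fanout {k}: ~{wasted_fraction(10_000, k):.0%} of peers never upload")
        # fanout 2: ~50%; fanout 4: ~75%; fanout 8: ~88% -- capacity a mesh reclaims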

    • by Anonymous Coward
      It isn't surprising that robustness and QoS may be easier to achieve for real-time streaming networks than for file transfer; real-time streaming imposes more constraints on the client than does "plain old" file transfer.

      In the real-time streaming scenario, clients are restricted to accessing the content synchronously (or close enough). All clients access the same content at the same time and have no say in what content comes out of the source. Thus, real-time streaming networks have a large pool of 'volunteer' routers to draw from when a disruption occurs.

      In the file transfer scenario, clients access content asynchronously. The network is less homogeneous, so file transfer networks have more difficulty reconfiguring themselves when a node leaves the network. Extra flexibility in the client translates into more complex, less reliable networks.
      • real-time streaming imposes more constraints on the client than does "plain old" file transfer.
        There's a tradeoff between demands made on the client and demands made on the server. Making life easier for the client is a bad thing - it causes more problems on the server, which is where all the difficulties were to begin with.
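
      To make the 'volunteer routers' point above concrete: because live-stream peers are all at (nearly) the same play position, any surviving peer can adopt the children of one that leaves. A toy repair step, with invented names and random reattachment standing in for a real policy:

          import random

          tree = {
              "source": ["p1", "p2"],
              "p1": ["p3", "p4"],
              "p2": [],
              "p3": [],
              "p4": [],
          }

          def on_depart(tree, gone):
              # Detach the departing peer and collect its orphaned children.
              orphans = tree.pop(gone, [])
              for kids in tree.values():
                  if gone in kids:
                      kids.remove(gone)
              # Any synchronized survivor can adopt an orphan; exclude the
              # orphans themselves so no cycle gets detached from the source.
              candidates = [p for p in tree if p not in orphans]
              for o in orphans:
                  tree[random.choice(candidates)].append(o)

          on_depart(tree, "p1")
          print(tree)  # p3 and p4 rejoin under source or p2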
