
Peer-to-Peer Overview

Posted by michael
from the state-of-the-art dept.
An anonymous submitter sent in: "New Scientist has an interesting feature on peer to peer systems, taking a less copyright orientated approach, and going into some technical detail about how the various P2P systems work and compare to each other."
This discussion has been archived. No new comments can be posted.


  • I guess since NO-ONE (so far) here has seen the article, we can now tell who is talking out of their ass when they reply to it.

    _ _ _
    I was working on a flat tax proposal and I accidentally proved there's no god.

  • You mean there's a class of morons dumber than ircers?
  • And I mean no, I will not have fries with that.
  • by cymen (8178)
    Nope... They must have one weak ass jsp-based server...
  • You are one sick puppy. Seek help, please.
  • i guess it had to come to this... the nap was the coolest prog since DOS... everyone is on it... everyone shares files. But apparently we have been stealing from the companies that charge $14.99 for a CD that costs them about 50 cents to package and market. Napster is going down for all the wrong reasons.
  • by Sanity (1431) on Sunday March 11, 2001 @12:01AM (#371536) Homepage Journal
    You seem to fall into the common trap of thinking that the different P2P architectures are just different approaches to doing the same thing. This isn't the case; there is actually very little in common between the various architectures, as they generally have very different goals. For example, Napster and Gnutella are both designed to let people share their MP3s with other people, Freenet is designed to provide a secure forum for free speech, Seti@home is designed to combine people's spare cycles to find aliens, etc. These systems are as different as chalk and cheese; just because many journalists think they fall under the P2P buzzword doesn't mean that they have any more in common than any other software, nor that there is any more room for interoperability than there is with any other software that communicates via the Internet.

    The claim that P2P would be great if only the systems would interoperate really doesn't bear much scrutiny; TCP/IP is often the full extent of what these systems have in common. This isn't a flaw, it is a simple fact.

    --

  • gee, thanks, now it's -5. lol.

    "just connect this to..."
    BZZT.

  • And we're free to talk about how idiotic statements like the one you just made are.

    LK
  • I'm just making a wild guess here but the admin must have seen the /. article and pulled the cord or something.
    -You'll never get me evil slashdotters, muawahahahahahahaha!
  • by koshi (98864) on Sunday March 11, 2001 @02:16AM (#371540)
    As an actual subscriber to New Scientist, I have read the article (in print).

    It starts with a brief history of P2P systems (focusing on the modern music-sharing ones). It then goes on to talk about Gnutella's technological downfalls. The main part of the article is taken up with the method by which Freenet works.

    It's a good article that talks quite a bit about the technologies and only briefly touches on copyright issues.

  • P2P Sucks... I don't even need to go there and explain it ... if you don't understand it you're stupid ... Go buy a hub.
  • This is becoming quite amusing indeed. Whenever a story gets posted to /. the linked site seems to get hit so heavily that it's down half the time :) Hmm now if I could only find a few interesting tidbits on some hated companies that we all love to hate :)
  • Linux: Goddam, if I only had a Napster client on my Linux box.

    I don't know if you were serious or not, but there are tons of Napster clients on Linux. I will list a few...

    http://sourceforge.net/projects/gnome-napster/
    http://sourceforge.net/projects/knapster/
    http://sourceforge.net/projects/nap/

    Lord Arathres
  • Freenet is anonymous; its architecture automatically does proxying, and multiple layers of it at that. While the first node you connect to can figure out who you are, it can't easily determine whether you initiated the request, and any nodes further down the chain don't have a clue who you are. Unlike Gnutella, when a Freenet request is fulfilled the requesting node doesn't directly connect to the node that has the data.

    Secondly, by creating many copies of popular data you are decreasing the amount of bandwidth used in exchange for increasing the amount of space used. Say you have a popular site in Europe that's getting lots of hits from the US. Without Freenet, every time someone in the US wants the data their request has to go over congested international links, which is very wasteful. With Freenet, a copy of the data (probably many copies, in fact) would soon get stored in the US, reducing the load on those slow international links. Even though the data has to go through many nodes in between, and has multiple copies of it made, you still get a significant savings in bandwidth usage. And any copies made while en route are likely to be used again by other people, because those copies are by nature "close" to the keyspace that node serves.

    Ever heard of Akamai [akamai.com]? They do essentially the same thing with their network to reduce the load on sites around the net.

    Now currently Freenet doesn't do a very good job of this because it doesn't take network speeds into account when choosing nodes while routing messages. But it will be quite simple to add support for that and will be done after a few more releases.
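    A toy sketch of the proxy-and-cache-on-the-way-back idea described above (not Freenet's actual code: the class and method names are invented, and real Freenet picks the next hop by key closeness rather than trying neighbors in order):

```python
# Hypothetical sketch: each node forwards a request it can't serve, and
# caches the data on the return path, so popular data migrates toward
# the people requesting it.

class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}       # key -> data held locally
        self.neighbors = []   # other Node objects

    def request(self, key, hops_to_live=5, visited=None):
        visited = visited if visited is not None else set()
        visited.add(self.name)
        if key in self.store:
            return self.store[key]
        if hops_to_live == 0:
            return None
        for nbr in self.neighbors:   # real Freenet routes by key closeness
            if nbr.name in visited:
                continue
            data = nbr.request(key, hops_to_live - 1, visited)
            if data is not None:
                self.store[key] = data   # cache on the return path
                return data
        return None
```

    After one request travels a chain of nodes, every intermediate node holds a copy, which is the bandwidth-for-space trade the comment describes.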

  • by JohnsonWax (195390) on Sunday March 11, 2001 @01:57AM (#371545)
    Actually, it really isn't too simplistic.

    Dave Winer is closing in on this with his just released Radio Userland [userland.com] product. It's a fairly generic product that uses XML, XML-RPC, SOAP, etc. to exchange information between clients and client-server. It doesn't much matter whether it's headlines, mp3 files, quicktime movies, news stories, html templates, whatever. It has the plug-in architecture and a built-in database, scripting language, web client and server. I don't see any reason why it couldn't also work as a CPU sharing mechanism as with Seti@home. I could probably write such a plug-in for his product in a day.
  • by Sanity (1431)
    The thing that irks me, and the thing you have failed to address, is the expectation that all P2P systems should interoperate. Of course, it is good when people standardize on open formats and protocols such as XML and XML-RPC, but using XML doesn't suddenly make your software interoperable, and sometimes it isn't possible. For example, the communication protocol used in Freenet has very specific requirements in terms of crypto, forward deniability, and security, which mean that anyone who wanted to talk to Freenet must speak Freenet's language.

    Interoperability is a good goal in general terms, but there is nothing about P2P systems which makes them more in-need of interoperability than any other software. In many cases, TCP/IP is all the interoperability they need.

    --

  • That's why napster's so great - it derives its benefit from the fact that practically everyone is hooked in and their files are centrally indexed.

    Which is the whole legal issue -- there is a knowledge of what files are available at any given time, and stored on a central server.

    This is why Gnutella (the network itself, not the outdated client) is much harder to monitor; there does not exist a central server with an index.

    If you want to find more, set your TTL to a nice (not absurd) amount. Just hope all the links are >= T1. :)
    Thus sprach DrQu+xum.
  • You really don't seem to understand how any of the P2P systems work, particularly Freenet. I suggest you read this paper [freenetproject.org] to get better informed.

    --

  • It seems everyone thinks peer-to-peer is great for downloading copyrighted information without consent, just by sharing files. But I think peer-to-peer has its greatest value in applications where it refines information and lets people cooperate in new ways. Groove is an excellent example of this. I think Groove will be the next killer app. If you want to use it to share MP3s among friends, great, you can do that. But you can also use it to keep track of some project you're working on with colleagues in different places. Or you can use it to play a nice game of chess with a distant friend. Or... you just have to use your imagination. I am sure there are great ideas not yet found for how you can use peer-to-peer.

    My company is developing a product that optimizes the supply chain flow between customers and suppliers by exchanging information about stock levels, etc. Then it uses this information to calculate the best time to refill the stock. This product is peer-to-peer, and I think it has a great future. It also takes advantage of the network effect in that the more people use it, the greater the advantage is. For suppliers, it gives a much better picture of how market demand changes. For customers, it relieves them of the hassle of ordering products all the time. That is now the supplier's responsibility.

    P2P has a great future. But do not only think Napster when you think P2P. Think new!

  • Rather than using the Napster protocol (which is very limited), I've developed a different one so all files can be accommodated, and with more advanced searching capabilities.

    It uses a distributed server arrangement: rather than sharing indexes around (a big bandwidth clog), it passes search requests around instead. It'll be up on SourceForge soon. Stay tuned.
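    The "pass search requests around instead of indexes" idea might look something like this toy sketch (all names invented for illustration; the poster's actual protocol isn't public, so this is only the general shape of query forwarding with a hop limit):

```python
# Hypothetical sketch: each server answers a query from its own index and
# forwards it to linked servers with a decremented TTL, instead of
# replicating everyone's index everywhere.

class Server:
    def __init__(self, files):
        self.files = set(files)
        self.links = []   # other Server objects

    def search(self, term, ttl=3, seen=None):
        seen = seen if seen is not None else set()
        if id(self) in seen:      # don't answer the same query twice
            return set()
        seen.add(id(self))
        hits = {f for f in self.files if term in f}
        if ttl > 0:               # forward while hops remain
            for peer in self.links:
                hits |= peer.search(term, ttl - 1, seen)
        return hits
```

    The TTL bounds how far a query travels, which is the same knob the earlier Gnutella comment mentions: a bigger TTL reaches more files at the cost of more traffic.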

  • Napster will likely be remembered not so much for enabling music piracy as for starting a revolution that changed the way the Internet worked.

    Napster is the pioneer of a technology known as peer-to-peer networking, or P2P for short. The core idea of P2P is to allow individual computers to communicate directly over the Internet.

    Er? Am I missing something here? Isn't "The Internet" already a peer-to-peer network???? Hasn't it always been????


    --

  • There's another important reason: if a node can go down at any time, then it doesn't matter if I wish to run a server on my desktop. Gnutella lets me share files (many of them legal) while only remaining connected for a few hours or days at a time. If I go down, then someone else (hopefully) has the data also, so it can still propagate. P2P in its current incarnations doesn't allow nicely published pages, because the clients are intended to download a single file, not a collection thereof. There are cases where I would like to be able to share my (legal) music and videos that other people want (and they do), without having to find a way to provide a server that is up 99%+ of the time and has sufficient bandwidth for all requests. P2P provides an answer, provided other people provide mirrors (something I try to do). Also, the mirrors are so easy to create -- just download the file and *poof* there it is.
  • I'm quite aware of how ALL the P2P systems work, especially freenet. I'd love to see any particular criticisms you have beyond an outright dismissal of my arguments because I "don't understand." My statements about Freenet are accurate and true; if I request a document from Freenet, unless a node one hop away has it, there are going to be multiple proxy copies made to service it. To quote from the document you link to, "The data is returned from d via e and b back to a, which sends it back to the user." That's essentially a full download to e, a full download to b, and a full download to a. If any one of those guys is on a 56k modem, it's gonna suck.

    To reiterate - P2P, by definition, requires more complexity, bandwidth and expense than a simple centralized system. My argument is that, like crypto, the extra hassle and complexity will mean that people will only use P2P when it is more effective than cheaper and simpler alternatives. The only times that will be the case is if you are trying to avoid detection as the distributor of the data.

    As an aside, I very strongly doubt that Freenet would survive a concentrated DMCA attack. Each document has a uniquely identifiable string. If I'm the RIAA, I log onto Freenet with a hacked client. I do a search. The server one hop away says "Here ya go." As the RIAA, I don't know if that server proxied it or is storing it. And it doesn't matter. I send out a DMCA notice to the guy who owns the server (I know his IP and the time he was on, so I can do this via his ISP, who will cooperate, because he doesn't care). The guy who owns the server now has "actual knowledge" that copyrighted material is moving through his system. He can either block it (have his Freenet server not respond to requests for that), or simply drop out of Freenet. If he does neither, the RIAA DMCA notices his ISP, which simply pulls the plug on him, because they don't care (and he's prolly violating their terms of service by running a server, anyhow).

    I think something like Freenet can only work if most of the documents on it are the kinds of things people will risk going to jail for. I don't see most people risking going to jail for free MP3s.

  • You didn't mean orientated, you meant oriented. GAH!
  • While we're waiting for the article to come back, take a look at this site about p2p [openp2p.com], this article with statistics on gnutella and links [monkey.org], and of course this list of p2p clients [webattack.com]. There might be some interesting things in this slashdot article [slashdot.org]. And yes I am a filthy karma whore [karmawhore.com], and no I have no shame [shame.org] whatsoever.
  • Probably the wrong assumption. As the site has been downed due to unknown circumstances (probably not slashdotting, though), I can only throw out wild guesses of my own. The article probably deals with the technical details of P2P without getting into the storm of controversy surrounding the uses of said technology. Of course, can't tell until the site gets back up.

    Although, I'm starting to wonder if sitehosts will soon start advertising that they can survive a slashdotting as part of their promotional spiel.

  • hey, we're all human beings you know. can't we all just get along? I try not to hate any minority groups in society, but I think I can make an exception where Nazis are concerned. you people make me sick. if you want to talk this rubbish, do it elsewhere. this is Slashdot if you haven't noticed, and you're WAY off topic. just hijacking a random Slashdot item to promote your racist views is so lame..... the name Anonymous Coward is very fitting for you.
  • Well I was being kinda cynical :)... The server was knocked over when there was only one reply posted (some troll) here.
  • by Anonymous Coward on Saturday March 10, 2001 @11:02PM (#371559)
    Seeing as absolutely nobody here has read the article (it seems to have experienced some sort of preemptive Slashdotting--must be a new IIS "feature"), I can only guess as to whether I'm on-topic here...

    How to make the successor to Napster

    Most people don't realize that what made Napster successful was not its network protocol or its technical sophistication. Ease of use made Napster, and ease of use will make or break every P2P system. For a P2P system to work and be useful, it has to be simple, easy to find, easy to install, easy to use, and easy to upgrade. Why upgrade? Because upgrades will be necessary to combat "blocking" technologies instituted by ISPs.

    The way to accomplish this is not to write yet another Gnutella or OpenNap or Freenet or Blocks client. There are enough of these today. The solution is not necessarily to create new protocols--too many protocols, and your average Joe won't know what to use or why. Furthermore, you open yourself to liability if you do so, as described in this excellent white paper from the EFF [eff.org].

    The solution is to write a generic P2P interface. Make it extensible via protocol plugins. Users should be able to type in one search query, and have it search OpenNap, Gnutella, and Freenet simultaneously, and display all the results in one window. Plugins should concern themselves only with the P2P protocol, and with conforming to some standard interface.
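    A minimal sketch of what such a plugin contract might look like (every class and method name here is invented for illustration; real plugins would speak actual network protocols instead of returning canned results):

```python
# Hypothetical sketch: the front end knows only a tiny plugin contract,
# and each P2P protocol ships as a separate plugin conforming to it.

class SearchPlugin:
    """The one interface every protocol plugin conforms to."""
    name = "base"
    def query(self, term):
        raise NotImplementedError

class FakeOpenNapPlugin(SearchPlugin):
    name = "opennap"
    def query(self, term):
        return [f"{term}.mp3 (opennap)"]   # stand-in for a real network search

class FakeGnutellaPlugin(SearchPlugin):
    name = "gnutella"
    def query(self, term):
        return [f"{term}.ogg (gnutella)"]  # stand-in for a real network search

def search_all(plugins, term):
    # One user query fans out to every installed protocol; the results
    # merge into one list for a single results window.
    results = []
    for plugin in plugins:
        results.extend(plugin.query(term))
    return results
```

    The point of the contract is that the GUI never needs to know which protocols exist; dropping in a new plugin adds a network without touching the interface.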

    Do not write both the plugins and the interface yourself. Obviously, a GUI that does nothing by itself is not an infringing device. Better to compartmentalize things and make it harder for people to be hit with absurd lawsuits. Furthermore, the programmers who tend to be good at network protocols are generally not so hot at GUI design, and vice versa.

    Try to keep the plugin architecture very cross-platform. Ideally, plugins would be totally portable code requiring only a recompile (even when going from Windows to MacOS, for instance), or--better yet--would just be a bunch of specs in XML format.

    Set up a mechanism that allows updates to be distributed via the P2P network. Obviously, some sort of signing/verification will be required to reduce the risk of trojans. Still, the goal should be to make it almost transparent for novice users to add support for new protocols. If it becomes easy to implement, distribute, and adopt a new protocol every month, it will be impossible to stop P2P. Updating the GUI is less important--it's the protocols that have to change.

    Conclusion: What this will do is create a highly adaptable, fault-tolerant, lawsuit-resistant network that is easy to use. Why have I placed so much emphasis on ease of use? Without a large userbase, you're nothing. A P2P network is only as good as the libraries of its users. Why is this network so adaptable? Suppose one or two protocols get shut down or blocked or whatever. Someone creates a new protocol, writes a plugin, distributes it by one of the remaining channels, and we're back in business. Finally, the system also encourages modularity and code reuse, which is A Good Thing.

    By the way, if you can't figure out why I'm anonymous, you haven't been paying attention [eff.org].

    19021312

  • That's not helpful when you want to find a not-so-common file and therefore want to search a very large number of users to have a good chance of finding it.

    That's why napster's so great - it derives its benefit from the fact that practically everyone is hooked in and their files are centrally indexed.

    Sure, YOU can always go somewhere else, but if nobody else uses the same P2P network as you, it is useless.
  • Try to keep the plugin architecture very cross-platform

    I'd suggest not using XML specs, but including some interpreter for byte-compiled code, such as a small Python interpreter. This way all the logic is inside the plugin, and the application using that plugin can be dumb; it doesn't have to read through a defined spec, build an engine to understand it, and then talk that protocol.

    It would probably be more of a bear to write the XML plugin spec and an engine to interpret it than it would be to write a module for an interpreter.

    Byte-compiled modules are platform independent, so the same piece of code will run on Mac, Win32, UNIX, QNX, whatever... no need for a recompile.

    I suggest Python over Java since it's already ported to more platforms, a minimal interpreter is probably going to be smaller than the Java interpreter, and Guido isn't going to beat you over the head with language-spec conformance issues like Sun might.

  • Hey, it's your funeral. Maybe you forgot that we whites are the ones with the money, guns, and power?

    That is not what your inbred, trailer trash, nazi friends keep posting. They keep saying us Jews control everything. The banks, the media, Hollywood, everything. Answer me this, nazi boy. If we Jews own everything then how come I'm driving a 3 year old Honda instead of a brand new Porsche. Maybe I should go down to the Porsche dealer Monday, whip out my Jew id card, you know the one that the international Jewish cabal gave me, and demand my new Porsche.

  • Look, it may be simplistic, but it's necessary. It seems to be a noble goal, and even if it's as hard to implement as a desktop environment, it's the only way to ensure we are allowed to anonymously file-share in the future.

    To simplify coding it, he/she/it could have suggested...

    ...breaking the plugins down into client and server modules, but that makes it legally vulnerable

    ...breaking it down by type of service: P2P, central-database-based, and single-node shares, but that also makes it weak to attack legally.

    So the question really is: what do P2P, central-index, and node-by-node file sharing have in common?

    This should be really useful for downloading free software and public domain type stuff too. If not, it will be a short conversation when intent becomes a topic in court.

    slanted soapbox, preachy rant type thing

    One could look at the litigation and destruction of Napster as the dumbest thing corporate media ever did. All those users were going to one place, and may not ever have known they could learn a new interface... but they will know now. When the RIAA goes after the next target, making the next switch will be much easier for those millions of users. It might even be easier for techie types to adopt an ex-Napster user who is wandering the streets like a lost dog, and show them the two-way nature of the Internet. After all, not ALL musicians are/want-to-be RIAA, and for them this kinda sucks.

    Ex-Napster users have been un-AOLified, awoken from their clicky-clicky bad dream. I haven't met anyone who is a born-again AOLer, have you?

    End rant type thing

  • See Word for the Wise [m-w.com]:
    First attested in the mid-1800s, orientate is simply a longer way to say orient.
  • Site is still down here. Given the header of the story, however, I believe this is still on topic. I would like to comment on how once again we're hearing about the magical qualities of "P2P" software. Oh what a lovely new buzzword for journalists to exploit and for us to put on our bingo boards.

    What a remarkable new development this is, we must not have had this sort of technology since some 18 year old came around and made Napster! (/sarcasm)

    Ugh. Double ugh. I don't know about the rest of you, but I don't even consider Napster a peer-to-peer application. Without the server, it is nothing. Sure, the file transfers are between clients of the same server, but then again, both clients are acting as mini servers. On the same terms, I could call AIM P2P software, because a connection between clients is needed for file transfers. And hey, IRC is P2P also, because of the DCC functionality in lots of clients.

    As usual, the whole thing is blown out of proportion by the media. File sharing has been around for a long time. Pirating has been around for a long time. It's nothing new...
  • I can only agree with what you said, but the author you replied to was listing suggestions on:

    How to make the successor to Napster
    Besides, I would like to see the superGUI which would handle file-sharing, number-crunching, message board, instant messaging.... but actually both Napster and iMesh [imesh.com] are growing into something just short of being a web browser with all that functionality.

    -Kraft
    ----------------------------
  • The new equivalent to Napster will have to be P2P as much as possible. I agree that it will have to be as simple as possible (college students who think AOL is cool are among our target audience), but there are the requisite security issues.

    IP filtering: what can be done to make sure that IP addresses aren't shared?

    Firewalls/Proxy: Can we come up with an easy utility for users to work through these things?

    Linux: Goddam, if I only had a Napster client on my Linux box.
    ------------
  • I agree that the moderation of the nazi thread as offtopic is wrong. A few bad moderations from this discussion: 61,74 should be modded down, 17, 22, 44, 45 are offtopic but should be flamebait, 25 is offtopic but should be troll, 54 is 0 troll but should be -1 troll, and 43 is unmodded whereas it should be +5 informative (mod me up! please! i need karma! up up!).

    This sort of thing isn't isolated to discussions with no article to read. The problem is a flaw in the moderation system. To make sure all bad posts are moderated and poor moderations can be fixed we need more moderation points. Perhaps allocating 7-10 instead of 5 (with a limit on how many can be applied to a single user to prevent vendettas) would improve the situation.

    As the parent post pointed out, there is a need for a "stupid" rating. This should be added. There is also the problem of 2 moderators applying different ratings resulting in a poor description of the post, so 2 ratings should be displayed.

    Yes, the -1 layer does seem to be getting worse. We need a new layer for particularly bad posts. This way, those who wish to read the highly entertaining things like the panty thread can see that in -1, while people who post the same screen space wasting neonazi porn 5 times a day can get slapped down to -2.

    I do think anonymous posting should be kept, however. There are occasional good posts from AC's, as well as quite a few amusing ones, and the people who post crap can and do get accounts. They have enough free time not to be deterred by having to set up a new hotmail account every week. Above you suggest required registration with registered users being allowed to post anonymously, but still allowing the account to get trashed. This is unworkable for the simple reason that anonymity is lost: slashdot must keep a record of who posted in order to trash the account. If the post, for example, pisses off a corporation whose unethical practices are discussed they could sue the poster and have the court require slashdot to turn over all information about the poster. If they don't have information the poster is safe, but if their email or ip address is turned over they could be in a lot of trouble.

    Oh, another thing. Moderators, failed first post attempts should be modded down as redundant, not offtopic or troll.

    Finally, I think I should be given an infinite number of moderation points. And no, contrary to rumor I will NOT mod first posts up to +5 Informative. Really. That's not why I want mod points so much, I swear.

    btw, please don't mod this down, my karma is precious! I'm a karma whore [karmawhore.com] and i have no shame [shame.org] whatsoever!

  • by powlette (198002) on Sunday March 11, 2001 @04:49AM (#371569) Homepage
    I'm not one to praise Shawn Fanning, who is obviously a punk kid, but the idea behind the protocol is very good, and after studying Gnutella, Freenet, Blocks, and just about every front end out there, I've found they all have a long, long way to go.

    None of them are as fast or as easy to use as Napster, or reach 1/10th the number of people it does. I believe the solution, at least for the short term, is not to create new protocols from scratch, but rather to make the Napster protocol itself more distributed.

    Rather than having centralized Napster servers which can easily be shut down, it would be ideal to have linked servers that not only can share the load, but will propagate the index of the other servers. Then we need some way to make the Napster client aware of the servers and find one at random, possibly through a webpage with an index of registered servers, or by contacting other clients on the network, similar to Gnutella's node catcher.

    The client would also need to automatically jump from one server to another if the server goes down because of connectivity or tragic lawsuits.

    The server could even be built into the client, so that you can check a box and start operating as a server, which would publish your IP to others on the network. Once there are sufficient numbers of Napster client/servers out there, changing every hour, it will be very, very difficult to shut down, and for the average user, they can just download the new client and it will work as it always did.

    Napster should abandon their filtering direction and release this type of distributed, fail-over server as open source, along with the matching client. Once it's in the wild they can shut down their own servers and just host the Napster webpage. They can still enjoy millions of hits to their site, with no liability for running the Napster servers.

    I suspect even the RIAA won't be able to get a restraining order against a software company to make them stop creating software. Otherwise Netscape/MS/Apache and any other software that allows computers to communicate is next.

    So please, before we abandon the well-known Napster protocol, let's slightly change it to be more resistant to attack. OpenNap may be headed in this direction, but until I can visit their page and see 10,000 OpenNap servers all listed and linked together, they're just inviting themselves to another lawsuit.
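    The automatic server-jumping described above might be sketched like this (a toy model: `try_connect` is an assumed caller-supplied helper, and the server addresses are made up):

```python
# Hypothetical sketch: the client walks a list of known servers and
# settles on the first one that answers, so one server going down
# (for connectivity reasons or legal ones) is invisible to the user.

def connect_with_failover(servers, try_connect):
    """Return (address, connection) from the first reachable server.

    try_connect(addr) is assumed to return a connection object, or to
    raise ConnectionError when that server is down.
    """
    for addr in servers:
        try:
            return addr, try_connect(addr)
        except ConnectionError:
            continue   # that server is gone; try the next one
    raise ConnectionError("no server reachable")
```

    The server list itself could come from the registered-server webpage or from other clients, as the comment suggests; this sketch only shows the fail-over step.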

  • Yes, it's true that there will be many copies, and many of them will be relatively "near" me on the Internet. However, it is not expected that all (or even most) data will be one hop away. Any data that is more than one hop away will end up being proxied (transferred) 1 time for each server. That's how anonymity is provided. If we're talking about really big files like MP3s, number one, that means that, as a Freenet node, I'm passing through a LOT of traffic to the people near me proxying through me. Number two, it means that, when I click "download," it's got to be actually downloaded and uploaded two or three times for the average case. If one of those links in the middle has a 56k (or reboots in the middle), it's gonna be a REALLY slow process.

    Akamai caches, yes, and so does Freenet. But Akamai doesn't proxy your download through three other peers.

    You then finish, "Freenet doesn't do a very good job of this because it doesn't take network speeds into account...But it will be quite simple to add support for that and will be done after a few more releases." Yes, and, if you do that, it'll become a backbone network, because all clients will "want" to connect to the few fast peers. If I'm the RIAA, I just DMCA notice all those fast peers' ISPs on the same day, and the network falls over.

    Regardless of specific attacks, my point is that Freenet (or any P2P) is, by definition, going to be more complex (and hence more expensive) than a simple, centralized system, and will therefore only be used when the primary overriding concern is trying to hide who is distributing the content, not for people who want to put up family pictures.

  • Yes, it's true that there will be many copies, and many of them will be relatively "near" me on the Internet. However, it is not expected that all (or even most) data will be one hop away. Any data that is more than one hop away will end up being proxied (transferred) 1 time for each server. That's how anonymity is provided. If we're talking about really big files like MP3s, number one, that means that, as a Freenet node, I'm passing through a LOT of traffic to the people near me proxying through me.

    You're forgetting just how powerful the automatic adaptation of the routing is. Simulations have shown again and again that the number of hops for any data is kept *very* low, about 5 hops for a large network with lots of data. As data is requested, the routing tables get optimized so that the node that returned the data is likely to be contacted again for similar data. Also, on insertion the data is more likely to go to that node.

    Also should you ever request data twice it's one hop. I know I request a heck of a lot of data multiple times...

    It's not as efficient as normal Internet routing for unpopular data, but way more efficient for popular data. And I'll guess that a good 70% of the data I use would be classified as "popular": the Slashdot front page and stuff linked off of it, for instance. (This is ignoring dynamically created stuff, which Freenet can't do anyway.)
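    The routing-table adaptation described above can be caricatured as a move-to-front list (a toy model with invented names, not Freenet's actual data structure; real Freenet keys its routing decisions on key closeness, not a flat preference order):

```python
# Hypothetical sketch: after a successful fetch, the node that served
# the data is promoted, so later requests for similar data try it first
# and the hop count for popular data shrinks over time.

class RoutingTable:
    def __init__(self, nodes):
        self.order = list(nodes)   # preference order for next-hop choices

    def next_candidates(self):
        return list(self.order)    # try nodes front-to-back

    def reinforce(self, node):
        # Promote the node that returned the data to the front.
        self.order.remove(node)
        self.order.insert(0, node)
```

    Repeated requests thus converge toward the nodes that actually hold (or cache) the data, which is why the simulations mentioned above see hop counts stay low.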

    Number two, it means that, when I click "download," it's got to be actually downloaded and uploaded two or three times for the average case. If one of those links in the middle has a 56k (or reboots in the middle), it's gonna be a REALLY slow process.

    As I mentioned, network speed will be taken into account in future versions of Freenet, so your 56k modem will only be used as a last resort.

    As for rebooting, nodes just plain don't get rebooted often. Nodes that aren't on 24/7 just don't get added to routing tables. Freenet is *very* harsh against nodes that aren't always on. Probably more harsh than it should be, really.

    You then finish, "Freenet doesn't do a very good job of this because it doesn't take network speeds into account...But it will be quite simple to add support for that and will be done after a few more releases." Yes, and, if you do that, it'll become a backbone network, because all clients will "want" to connect to the few fast peers. If I'm the RIAA, I just DMCA notice all those fast peers' ISPs on the same day, and the network falls over.

    The fastest peers won't be the same for everyone. It's far faster to connect to a system 3 hops away from you than to some system on the other side of the planet. Anyway, Freenet is already biased towards nodes with big datastores. And if you do knock out those nodes, not all that much happens. I've tested this on my node: even if I block access to a node holding 70% of the references, things return to normal in a few hours. And it's unlikely that any one node would rival the hundreds of smaller nodes around.

    Regardless of specific attacks, my point is that Freenet (or any P2P) is, by definition, going to be more complex (and hence more expensive) than a simple, centralized system, and will therefore only be used when the primary overriding concern is trying to hide who is distributing the content, not for people who want to put up family pictures.

    Any *free* system is going to be used for illegal stuff if the users can get away with it. Freenet still provides many advantages over centralized servers when you're trying to serve large data. The automatic caching makes the Slashdot effect just not happen: the *more* often a site is requested, the faster and more reliable it is to get the site. I use Freenet; this is true and works really well even in spite of all the bugs and other nastiness in the network right now.
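    That caching claim can be illustrated with a little toy model (a sketch only, assuming a ring of nodes and a global view of where copies live, which no real network has; none of this is actual Freenet code): one popular key starts on a single node, every fetch leaves cached copies along its path, and the cost of fetching falls as the key gets more popular.

```python
import random

# Toy model, not real Freenet: N nodes on a ring, one "popular" key that
# initially lives only on node 0. Each fetch walks one hop at a time toward
# the nearest existing copy and caches a copy at every node it passes.
random.seed(2)
N = 200
holders = {0}     # nodes currently holding a copy of the popular key

def ring_gap(a, b):
    """Hop distance between nodes a and b on the ring."""
    d = abs(a - b)
    return min(d, N - d)

def fetch(requester):
    """Walk toward the nearest copy, caching on the way; returns hop count."""
    node, path = requester, []
    while node not in holders:
        path.append(node)
        dest = min(holders, key=lambda h: ring_gap(h, node))
        # step one hop around the ring toward the nearest copy
        if (dest - node) % N <= (node - dest) % N:
            node = (node + 1) % N
        else:
            node = (node - 1) % N
    holders.update(path)     # copies multiply with every request
    return len(path)

requests = [fetch(random.randrange(N)) for _ in range(100)]
early, late = requests[:10], requests[-10:]
print(sum(early) / 10, sum(late) / 10, len(holders))
```

    Because copies spread along every request path, the last few fetches are nearly free while the first few paid the full distance: the opposite of what happens to a single slashdotted server.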

  • FYI, O'Reilly [oreilly.com] recently had a big P2P conference in San Francisco. You can find out lots more information on P2P, especially on who all the current players in the market are and what they are making, here [openp2p.com].

  • You're right, it wouldn't be trivial to make Gnutella talk to Freenet talk to Freehaven. However, not being trivial does not mean it is a) not possible, or b) not worth doing.

    In fact, Brandon Wiley, a Freenet developer, has a really good article about this in the O'Reilly P2P book.
  • I'm glad everyone has decided to just drop this copyright thing and move along with the P2P development. Now I feel we can all get something done...
  • apparently we have been stealing from the companies that charge 14.99 for a cd that costs them about 50 cents to package and market.

    It costs more than that to market a CD. It costs to record the music, it costs to license the samples used on the CD, and it costs to produce promotional tools such as a website or music video.

    Napster is going down for all the wrong reasons.

    The "wrong reasons" you're referring to include the fact that recording artists who publish their work for a free download [mp3.com] have a very hard time getting their work on mainstream radio thanks to under-the-table payola systems that the RIAA and NAB maintain.


    All your hallucinogen [pineight.com] are belong to us.
  • by Anonymous Coward on Saturday March 10, 2001 @10:29PM (#371576)
    For discussion of p2p issues and apps, please visit infoanarchy.org [infoanarchy.org]

    And if you are looking for a good p2p app, don't forget to check the resources page. Thank you.

  • That was the fastest slashdotting I've seen.
  • Did anyone see that article before it was slashdotted?

    _ _ _
    I was working on a flat tax proposal and I accidentally proved there's no god.

  • But my favorite Peer-2-Peer file transfer system is still just sending files over ICQ.

    It's not as easy, it requires I actually interact with the other person, and it's not always reliable, but on the other hand I normally know the person pretty well.

    "Everything you know is wrong. (And stupid.)"
  • but it looks like the site is not answering or something.

    I have yet to see the article, under 3 different browsers...

    between that and the unusual number of drunken trolls... I think I'll wait until morning before trying to make a comment.

    New Scientist usually does good articles, even if a little light in content (no more than a dozen paragraphs or so).

    Should be interesting.

  • Alt.sex.stupid.racist.slashdot.ac

