Music Media

Closed Gnutella System to Prevent Bandwidth Hogs 251

prostoalex writes: "Salon.com is running a story on Gnutella developers contemplating the creation of a closed or authorization-only system to prevent bandwidth hogging. Turns out, numerous applications, including Xolox and QTraxMax, employ querying algorithms that are capable of bringing network traffic to a halt. While this gets better download speeds for users of those applications, the damage to network traffic as a whole is substantial."
This discussion has been archived. No new comments can be posted.


  • by MarsBar ( 6605 ) <geoff@@@geoff...dj> on Friday August 09, 2002 @04:21AM (#4037940) Homepage
    If you make a system which allows this kind of abuse you should expect it to happen.

    The solution is not authentication - it's building better network infrastructure.

  • Ozone! (Score:3, Interesting)

    by B3ryllium ( 571199 ) on Friday August 09, 2002 @04:21AM (#4037941) Homepage
    Now would be a good time to plug the free, recently-opensourced Ozone file sharing program. It interfaces with MUSCLE/BeShare [beshare.com] servers to allow people to share files without worries of AdWare and SpyWare and junk like that.

    Ozone [ozone-o3.net] - Available for Linux, Windows, and OS X.

    Beryllium's BeShare Server - use "Beryllium.BeShare.Com" inside of Ozone to check it out!

    Enjoy :)
    • Re:Ozone! (Score:3, Interesting)

      I use it daily from work.

      Totally indispensable when you have a tough coding problem and need
      instant coding help.

      I rely on many friends from the BeOS Community to help me out, and I in turn
      do the same for others.

      It's what makes us a very friendly bunch, to be sure.

      I only wish there were more features in Ozone, but it's open source now...
      perhaps someone from the linux community will help us poor souls out?

      (hint hint... nudge nudge... there's free chocolate in it for anyone up to
      the task... honest! ;)

      Seriously though... the entire MUSCLE/BeShare system is TONS better than
      anything I've ever used elsewhere when it comes to just working, and
      connecting with a real community, instead of faceless creatures sucking your
      bandwidth to get the latest Britney. (ugh)

      Ozone. It's cool.
      Muscle. It's even cooler.

      You can find more information on Muscle here:
      http://www.bebits.com/app/962 [bebits.com]


      Definitely worth a read.

      -Chris Simmons,
      Avid BeOS User.
      The BeOSJournal
      http://www.beosjournal.org
    • Ozone! (Score:-1, Advertisement)
  • Anyone else (Score:3, Interesting)

    by jchawk ( 127686 ) on Friday August 09, 2002 @04:29AM (#4037955) Homepage Journal
    Is anyone else reminded of the book Animal Farm after reading this article?

    • Actually, I'm very much reminded of John Nash's theory, which was elegantly described in A Beautiful Mind. If everyone does what's best for himself, everybody blocks each other, as we see here when each client tries to maximize its own search requeries. What must happen is that everybody on the network does what's best for himself AND for the network, in this case backing off from queries, not auto-promoting to supernodes, etc.
  • by jukal ( 523582 ) on Friday August 09, 2002 @04:29AM (#4037957) Journal
    Here's a clip from an email I sent sometime ago to someone, it might or might not have something in it, judge yourself.

    - The system must reorganise itself automatically based on current analysis of the nodes available on the network.
    - The system must have a dynamic trust model, based on "paranoia".
    - The trust model must be used in combination with other characteristics of each peer (node) to select the best population of nodes as the more important servants. Untrusted/neutral nodes are not to be given any crucial tasks. No one can do anything crucial alone; the action must be confirmed by other trusted nodes.
    - All functionality of the network must be replicable automatically. Tasks done by any node must be transferable transparently.
    - Weak nodes will not be given any "community work".
    - Every node must pass constant quality criteria to be able to perform any actions on the actual network.

    Just to mention a few points. In short, anarchy does not work - even in P2P networks. We need a government, but one which is always on the move, and which governs the population using strict - but adaptive - rules. :)
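
    A minimal sketch of how such a paranoid, adaptive trust score might be computed (the fields, weights, and threshold below are illustrative assumptions, not part of any real protocol):

    # Illustrative sketch: adaptive trust scoring for peers in a P2P overlay.
    # All weights and thresholds here are made-up assumptions.
    from dataclasses import dataclass

    @dataclass
    class PeerStats:
        queries_answered: int = 0     # useful work done for others
        queries_sent: int = 0         # load placed on the network
        protocol_violations: int = 0  # out-of-spec behaviour observed

    def trust_score(p: PeerStats) -> float:
        """Higher is better; violations are punished much harder than mere load."""
        return p.queries_answered - 0.1 * p.queries_sent - 10 * p.protocol_violations

    def eligible_for_community_work(p: PeerStats, threshold: float = 5.0) -> bool:
        # Weak or untrusted nodes are never promoted to supernode-like roles.
        return trust_score(p) >= threshold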
    • This has all been implemented in GNUnet [gnu.org].
      • Yup, it does seem to have good bones; thanks for reminding me to check the site again. Although the mail I posted was more about grid computing than P2P in the sense of Gnutella, Napster, or GNUnet - which to me mostly means file sharing - there are still the same fundamental issues to be solved. So, what I wanted to say is that GNUnet does not implement what I meant, but it could provide a good basis.
  • by Glowing Fish ( 155236 ) on Friday August 09, 2002 @04:30AM (#4037959) Homepage

    The biggest problem with gnutella is not technical. It is that gnutella was invented so that true hardcore underground people such as myself could complete our collection of hardcore underground things, such as the entire run of Evangelion. However, gnutella is cluttered with people only interested in Britney Spears. Here is an idea I first proposed on everything2 [everything2.com] for making gnutella less crowded.


    Gnutella is one of the best things to come out of Sedona, AZ since the hordes of Alien Invaders who passed through the vortex. At least for those of us who have DSL or better, Gnutella is the best way to complete our collection of Evangelion episodes, obscure hip hop mp3s and fets.com sets.

    The problem with gnutella, though, is that it is crowded, and according to my estimates, about 75% of this crowding is due to people looking either for mp3s of that damn song that plays on the radio every half hour and/or nude pictures of celebrities. Often, to compound matters, these people are looking for nude pictures of that one celebrity that sings that damn song they play on the radio every half hour.

    If we have a tool that allows us to download obscure 90 minute long epic techno ballads from the Slovak Republic, why are we allowing people to use it to listen to music that they can hear by turning on MTV?

    The answer is because we don't know how to stop them. But I have a possible solution for our problem. All it requires is for about 100 or so people to put a file in a shared directory called "Brittneyspearsbarebreasts.jpg" or something along those lines. But instead of said picture actually being of Miss Spears' bare breasts, why not make it something else...such as possibly goatse.cx?

    After seeing this picture one too many times (which will probably be the first time), many people will cease to use gnutella as a vehicle for their pop culture stupidities.


    • This reminds me of the people that were putting MP3s out on Napster with random chicken noises embedded or just laughing. It seems that it just pissed people off but didn't stop them from searching and downloading what they wanted.
      • The record industry is already doing this in order to pollute P2P networks.

        All it does is piss off dial-up users; it doesn't stop them, they just keep searching.

        Salon's article on the practice [salon.com]

        I think having an enforced standard for the Gnutella protocol is the sensible way to go. If you're going to design a protocol, do it properly and completely, which includes specifying exactly and clearly what a supernode is and how it should behave. If you don't clearly define every aspect of the protocol then it is going to break down as people interpret it in different ways.

        A protocol has to be a set of rules or it isn't a protocol by definition.
    • by gripdamage ( 529664 ) on Friday August 09, 2002 @06:04AM (#4038084)

      All it requires is for about 100 or so people to put a file in a shared directory called Brittneyspearsbarebreasts.jpg or something along those lines. But instead of said picture actually being of Miss Spears' bare breasts, why not make it something else...such as possibly goatse.cx?

      What is interesting to me is that this would be EXACTLY what freeloaders would do if sharing was required. Just something to think about for people who think they have the freeloader issue figured out. It's a lot more difficult than it seems, since file names and file sizes say nothing about the quality of the content being shared.

      Also, if current Gnutella clients were simply amended to have the option "don't allow people with 0 files in their library to download", how long would it be before a client was produced that falsely reported files in its library: files which didn't exist and which you could never download?

      • don't allow people with 0 files in their library to download

        Then what about one file?

        Besides, making the network trade-only leads to a chicken-and-egg problem for new users. How are "honest" users (the ones willing to share) supposed to get into the network in the first place? Where does a new network member get her first audio or video file?

    • by Anonymous Coward
      Evangelion is not obscure. If you have any respect for the people who created it, you will buy the DVDs. It's not hard, just go to animecastle.com, amazon.com, bn.com... take your pick.
    • Interestingly, this is more or less one of the proposals put forward by the RIAA: flood the P2P networks with files named after popular songs, but containing garbage only.
    • How arrogant.

      Everyone has just as much right to use the P2P network as you. They can search for whatever they want. Who are you to govern what is a proper use of the tool?
    • In case anyone was wondering, TWAJS.

      Although I am glad to see that it somehow sparked a fight about the validity of Evangelion's ending. Perhaps we can also have a Rei vs. Asuka debate?

  • by Anonymous Coward on Friday August 09, 2002 @04:41AM (#4037971)
    How about implementing per-node policing using a credit system like GNUnet? (http://www.gnu.org/software/GNUnet/)

    Nodes individually keep track of the behavior of their neighbors. Bad or expensive behavior like out-of-spec activity or excessive querying lowers the 'credit' of the node. Good behavior like answering queries increases a node's credit. Credit determines the probability that a node's queries will be answered or passed along and the priority with which they will be treated. Abusively written clients will eventually be ignored out of the network.
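
    A rough sketch of that per-neighbor accounting (purely illustrative; this is not GNUnet's actual implementation):

    # Sketch of per-neighbor credit accounting (illustrative, not GNUnet's code).
    import random

    class Neighbor:
        def __init__(self):
            self.credit = 1.0  # start every neighbor out as neutral

        def record_good(self, amount=0.5):
            # e.g. the neighbor answered one of our queries
            self.credit += amount

        def record_bad(self, amount=1.0):
            # e.g. out-of-spec activity or excessive querying
            self.credit = max(0.0, self.credit - amount)

        def should_forward(self) -> bool:
            """Forward this neighbor's query with probability tied to its credit."""
            p = self.credit / (self.credit + 1.0)  # maps credit into (0, 1)
            return random.random() < p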
    • possible flaw? (Score:2, Interesting)

      by Erpo ( 237853 )
      I've done everything short of examining the code for GNUNet and a possible flaw occurs to me. From your post:

      Bad or expensive behavior like out-of-spec activity or excessive querying lowers the 'credit' of the node. Good behavior like answering queries increases a node's credit.

      How to write an "abusive" client that is still serviced by the rest of the network:

      1. Create queries at the request of the user and send them. Re-query frequently to increase search results (a la Xolox) ["karma" decrease]
      2. Respond to all queries with an affirmative "I have that file!" message ["karma" increase].

      Abusively written clients will not eventually be ignored out of the network. Users of abusive clients will get better search results and clog other clients with false query hits in the process. In the long term, users will have to migrate to abusive clients to be able to get search results, thus crushing the network.

      I may be wrong - I only have coding and protocol development experience with gnutella servents. Hopefully the good GNUNet developers have come up with an elegant solution to this problem, but it doesn't seem like it on the surface.
      • 2. Respond to all queries with an affirmative "I have that file!" message ["karma" increase].

        That's already a problem that's been dealt with by Gnutella clients -- you would have servents that would respond positive to any (and all) query requests (typically by appending '.html' to it and returning a redirect to some spam page).

        The easy answer is that the newer clients have an option to send out queries for random data every so often -- anything that answers affirmative to those queries gets ignored. Simple and effective.
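
        A toy version of that probe, assuming a send_query() hook that returns the IDs of peers claiming a hit (the hook and the blacklist set are hypothetical stand-ins for real servent internals):

        # Toy honeypot probe: peers that claim to have a nonsense file are liars.
        # send_query() is a hypothetical hook returning peer IDs that reported a hit.
        import secrets

        def probe_for_liars(send_query, blacklist: set) -> None:
            fake_name = secrets.token_hex(16) + ".mp3"  # random junk nobody can really have
            for peer_id in send_query(fake_name):
                blacklist.add(peer_id)                  # ignore those peers from now on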
    • Problems about UDP (Score:2, Informative)

      by r6144 ( 544027 )
      I tried GNUnet last month. The most serious problem I see is that it uses UDP, so I can be flooded by UDP packets (sometimes about 30KB/sec) even after I shut down everything related to GNUnet. And there is no way to stop them --- even icmp-host-unreachable errors aren't respected. The UDP flooding didn't calm down till the next day.

      I hope such problems are fixed now, but older clients will continue to eat my bandwidth. I don't want to make my ISP unhappy by letting lots of useless packets in.

  • by af_robot ( 553885 ) on Friday August 09, 2002 @04:44AM (#4037978)
    How about enforcing UPLOAD/DOWNLOAD ratios for all users?
    That way users won't be able to download without contributing to other users...
    • by DNS-and-BIND ( 461968 ) on Friday August 09, 2002 @05:09AM (#4038019) Homepage
      They had those on BBS's. They sucked. Unethical people uploaded trash files for credit. And the rest of us, frankly, ran out of quality files to upload after a while.
      • by DrVxD ( 184537 ) on Friday August 09, 2002 @06:02AM (#4038076) Homepage Journal
        > And the rest of us, frankly, ran out of quality files to upload after a while.
        That's implicit in ratios, though. Ratios are - by definition - about quantity over quality. As you point out, imposing UL/DL ratios increases noise.
        • What about a grading system as well? Have 2 ratios running side by side

          Quantity Ratio

          and

          Quality Ratio (which is determined by a person's grading of the downloads they have made from that server)
          • > Quality Ratio
            So people download loads of crud to reduce their quality DL.
            Or they vote up the "quality rating" of stuff they've uploaded.

            Of course, this is very client/server oriented, and doesn't translate well to P2P anyway.
      • by af_robot ( 553885 ) on Friday August 09, 2002 @06:15AM (#4038091)
        Well, my idea is slightly different...
        You don't have to upload files manually - all you have to do is share a specified amount of traffic before you can download more from other users.

        Example: you want to download a 600MB file from other users. The admin server will check your account and verify how much traffic you are allowed to download. If you don't have enough credit, you have to wait until somebody downloads something from you.

        A good example is the eDonkey protocol: when downloading a big file you HAVE to share parts of it in order to finish the download.
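
        A very rough sketch of the bookkeeping such an admin server might do (the ratio and field names are made-up assumptions; no real Gnutella or eDonkey server works exactly like this):

        # Illustrative ledger for an upload/download ratio check.
        class RatioLedger:
            def __init__(self, ratio: float = 0.5):
                self.ratio = ratio        # must have uploaded at least ratio * downloads
                self.uploaded = 0
                self.downloaded = 0

            def record_upload(self, nbytes: int) -> None:
                self.uploaded += nbytes

            def record_download(self, nbytes: int) -> None:
                self.downloaded += nbytes

            def may_download(self, nbytes: int) -> bool:
                # Allow the next download only if the user has shared enough already.
                return self.uploaded >= self.ratio * (self.downloaded + nbytes)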
    • Raw byte ratios = bad.

      You need good administration and tight surveillance of users to make that work as intended.

      Neither of which are feasible or good ideas for something intended to be another network layer.
    • No. Download ratios are bad. There's no easy way for someone to start getting in such a "closed community" because, at the beginning, you just don't have interesting files to upload. You have two choices: Upload lots of crap (and probably get kicked/banned) or be ethical and just don't upload crap but wait maybe weeks until your friend comes over and gives you CDs to upload.

      Another possibility was to visit other networks or BBSes (this is where this ratio stuff started) which don't have ratios, download stuff there and upload it on your ratio net.

      But - if such networks exist, why use the ratio ones anyway? On the other hand, you would piss off those who are running the non-ratio net because you were just leeching like hell.

      Download ratios actually hurt the whole community very seriously.

  • Heh, actually, I think this could be a good idea. Besides improving overall network performance, these authentication measures could help prevent possible malicious attacks by RIAA bots poking around the network. I don't think we need to worry about any sinister plot to force users to use spyware-laden clients, either. Gnutella is rooted in open-source development, and its developers would be just as pissed as we would be if that happened. They wouldn't let it happen.
  • GNUNet (Score:3, Interesting)

    by flonker ( 526111 ) on Friday August 09, 2002 @04:46AM (#4037984)
    There is a P2P network layer called GNUNet [ovmj.org]. I've studied the papers on it, and the design looks extremely solid and resilient.
  • P2P and DOS Attacks (Score:3, Informative)

    by herwin ( 169154 ) <herwin@theworldELIOT.com minus poet> on Friday August 09, 2002 @04:48AM (#4037990) Homepage Journal
    I have gotten the impression that these P2P networks are not good netizens. I access the net via a dial-up connection. Within a few minutes of logging on yesterday morning, I found myself dealing with what appeared to be a DOS attack on port 6346 coming from an adsl connection in Lithuania. I have that port blocked, so I was seeing a large queue of security alerts from my firewall. This has not been the first time this has happened with one of the P2P ports. Shto/WTFO?
    • by Erpo ( 237853 )
      What probably happened is that you snagged an IP previously used by a gnutella user when you dialed in. You're getting 6346 connection requests because the IP you're currently using is in the host cache of one or more gnutella nodes out there that are trying to connect. If it really bothers you, reconnect and get a different IP. Otherwise, wait for a bit and they'll realize you're not running a servent and stop trying to connect.

      You're right, though. Most gnutella servent software out there doesn't behave very well.
      • I posted this elsewhere, but I have a static IP and I know there has not been a gnucleus client there for 2 months. I still get a barrage of 6346 connection requests 24/7. It doesn't bother me, the bandwidth used is trivial and the firewall stops it all, but it amazes me that some luser app is still trying to connect to me 8 weeks later!
        • I think that a lot of people just leave things queued up to do searches until they find the file. This can go on for days/weeks, especially with more obscure files. I have noticed the same things though. After using a P2P client, requests keep showing up for days. That is really one of the things that I don't like about a lot of Gnutella software.

          When I used Windows, I never experienced such a thing with WinMX, but then again, it is much less decentralized.
  • by Anonymous Coward on Friday August 09, 2002 @04:57AM (#4038006)
    IIRC, the big players on the Gnutella network at this point (Limewire, Bearshare, etc) are able to exchange version information, and to confirm that version information. If this is true, and it's not possible for a rogue application to masquerade as another servent, I believe it's time to lock abusive servents out of the network. If they aren't playing fair, don't let them play at all. Period.

    This means you, XoloX. As well as all the other servents which send requeries at ridiculously short intervals, send download requests tens of times per minute trying to force their way into a download slot, support downloading but not sharing, encourage or emphasize web downloading as opposed to participating in the Gnutella network, etc. Freeloaders are as much a problem as they ever were, but (IMO) only because they're being allowed to be such a problem.

    The time has come when abusive servents need to be shown the door. I don't mind sharing most of the time. But when the same asshole is hammering me 100 times per minute trying to get a download slot, or sending the same query every 5 seconds trying to find more sources, my desire to share files goes down the toilet. Something needs to be done.
    • by DNS-and-BIND ( 461968 ) on Friday August 09, 2002 @05:17AM (#4038028) Homepage
      The Gnutella developers see their mission as bringing a new, revolutionary network protocol to the masses. Something on the level of a new HTTP.

      The Asia-based Qtraxmax developers see their mission as getting as many software (spyware?) installs as possible, through promising a superior user experience, and they would cheerfully destroy the network to do so.

      Obviously, the solution is a new Gnutella option, defaulting to "on", that says "deny resources to abusive clients".

    • by ShaunC ( 203807 ) on Friday August 09, 2002 @05:17AM (#4038029)
      I agree with you that some of the more abusive clients are getting out of control. I don't agree with blocking them outright, though. Gnutella is where it is because it's an open network and an open protocol; I think we have to leave it that way if we expect any future genius to appear on the network. Closing things up and locking the doors, these aren't the appropriate solutions IMO.

      I think filtering of abusive apps should be done on the client side of the servent equation. The biggest problems I've seen lately don't involve Xolox specifically, but users of varying servents. People who queue up hundreds of different files to download at a time. People using programs which ignore "Not Shared" or "Refused" replies, and continue to pound my box looking for files that don't exist.

      I was out of town for a few days last week (all computers turned off, except for my router box). When I came back, I fired up my Gnutella program. Without even connecting to the network, I was immediately serving uploads. That means that someone was trying to download from me for three full days while a) the files were not shared, b) Gnutella wasn't running, and c) the freaking computer wasn't even turned on! Come on, servent authors: pay some attention when you get "Refused" or "Not Shared" responses. Drop such files from the queue after 2 or 3 failed tries, don't leave them sitting there for eternity.

      I want a setting that says "drop all packets from hosts who request a no-longer-shared file." I want a setting that says "drop all packets from hosts who attempt to download while the program is running but not connected to the network." I want a setting that says "drop all packets from hosts who send download requests more than $TIMES per minute." My per-user upload limit is set at 1, so someone queueing up 200 files at a time generates an enormous amount of protocol overhead. It might be 5 hours before that user gets all of his 200 files, all the while he's sending a constant barrage of packets which accomplish nothing.
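
      As a sketch, the "more than $TIMES per minute" filter could be a sliding-window counter per host (the limit and window below are illustrative, not anything a real servent ships with):

      # Sliding-window rate limit per host: drop requests over a per-minute cap.
      import time
      from collections import defaultdict, deque

      class RequestFilter:
          def __init__(self, max_per_minute: int = 10):
              self.max = max_per_minute
              self.history = defaultdict(deque)  # host -> timestamps of recent requests

          def allow(self, host: str) -> bool:
              now = time.time()
              q = self.history[host]
              while q and now - q[0] > 60:       # discard entries older than a minute
                  q.popleft()
              if len(q) >= self.max:
                  return False                   # drop: this host is hammering us
              q.append(now)
              return True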

      Gnutella is an open network. Yes, we do need to do something about read-only clients, but I think it should be up to the people to decide what gets done. Provide the users with the appropriate filters and let the majority determine what behavior is good vs. bad.

      Shaun

      • I have a static IP. I haven't run a gnutella client for, oh about 2 months. I still get gnutella packets bouncing off my firewall at the rate of 4 or 5 a minute. That's insane...
      • My per-user upload limit is set at 1, so someone queueing up 200 files at a time generates an enormous amount of protocol overhead. It might be 5 hours before that user gets all of his 200 files, all the while he's sending a constant barrage of packets which accomplish nothing.

        How about an option in the protocol that transmits the "per-user limit" value on failed requests? How about clients that react to this value?

        Of course, peers that send requests for the same files every few seconds should be blocked. This really hurts bandwidth.

  • by dirtsurfer ( 595452 ) on Friday August 09, 2002 @05:03AM (#4038012) Journal
    The userbase of Xolox and QTraxMax doubled today...
  • by dmiller ( 581 )

    While it gets better download speeds for the users of the aforementioned applications, the damage to network traffic as a whole is substantial.

    Do you expect the same people who use the network predominantly for breaching copyright to care about the greater good?

    • silly (Score:4, Insightful)

      by theLOUDroom ( 556455 ) on Friday August 09, 2002 @09:48AM (#4038807)
      Do you expect the same people who use the network predominantly for breaching copyright to care about the greater good?

      Do you actually think the copyrights they're breaching have anything to do with the greater good?

      Four companies have collectively monopolized music distribution, using copyright. Is this a good thing?

      Get real. Record companies are scum. The artist would get more money if I mailed them a quarter than if I bought the CD. Meanwhile, I would be giving the RIAA more money to keep it illegal to play legally purchased DVDs on my PC. I hope they all go bankrupt. Then we'll have competition.

      I'll participate in a free market, but not the current abusive, short-sighted oligopoly. Tell me where I can legally download my 300 favorite CDs for a reasonable fee. I can't. Thankfully record companies don't have a long-term business plan. They just keep trying to stifle new technology and get their business model legislated. They should be trying to provide the services people want. That's what they'd be doing in a free market economy. Instead they're trying to tell me what I want. They can bite me.
  • by Anonymous Coward on Friday August 09, 2002 @05:07AM (#4038017)
    The S&P 500 and the FBI's most wanted lists are going to be merged.
  • GNL (Score:5, Insightful)

    by TheSHAD0W ( 258774 ) on Friday August 09, 2002 @05:10AM (#4038020) Homepage
    I was a part of the Gnutella development clique a while back, and had made a few proposals [gnuranium.com] on improvements to Gnutella clients.

    One such proposal, GNL [shambala.net], was to provide a way to define alternate Gnutella networks from the main system, and include ways to limit their behavior. Another proposal, GNV [shambala.net], was a method for administering these networks, and said administration could be performed anonymously.

    Many people liked my ideas, until I made the mistake of mentioning that the end result would probably be differentiation of Gnutella into several networks, each specializing in different types of files; it would be like making Gnutella into IRC, with separate server networks providing different flavors of service. I also mentioned that I thought the original Gnutellanet would wither on the vine. They looked on this with horror and dropped my suggestions.

    *shrug* I dunno. Considering that, at the time, the Gnutellanet was scaling itself into bloated nonoperation, I thought splitting the Gnet into different specialty networks was a good idea. Clients could even log onto more than one Gnet at a time.
  • by Todd Knarr ( 15451 ) on Friday August 09, 2002 @05:21AM (#4038032) Homepage

    It's not like this hasn't happened before.

    Sun did it with Ethernet. They set their NICs to use the minimum retry interval instead of minimum + random time like the spec says they must. This got better performance for Sun equipment. Right up to the time where someone put a dozen Suns on a single Ethernet segment and the competition between all of them hammered the network down to 10% of the expected bandwidth.

    Various TCP/IP "accelerators" tried this too, by ignoring the exponential-backoff and slow-start parts of the TCP spec. They too improved speeds for the people who used them. Right up to the point where lots of people started to use them, when the competition between them hammered their transfer rates down to a fraction of what's expected.

    We've seen it on UDP-based streaming protocols, where lack of flow-control mechanisms causes massive congestion problems and slower transfer rates than when flow-control is applied.
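
    For reference, the spec-compliant behaviour those "accelerated" implementations skip is exponential backoff with a random component; a minimal sketch:

    # Minimal sketch of exponential backoff with random jitter - the behaviour
    # the shortcut-taking clients above omit, to everyone's eventual detriment.
    import random, time

    def send_with_backoff(send_once, max_retries: int = 10, base: float = 0.5) -> bool:
        for attempt in range(max_retries):
            if send_once():
                return True
            # Double the window each failure, then pick a random point inside it,
            # so colliding senders spread out instead of retrying in lockstep.
            time.sleep(random.uniform(0, base * (2 ** attempt)))
        return False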

    So why didn't anyone expect/predict this when they were designing the Gnutella network and protocols?

    • Because Gnutella wasn't designed, it was hacked up in a weekend as a little closed source Windows file sharing app. Completely unscalable, completely insecure.

      After AOL stamped on the writer to remove the program, lots of people reverse engineered the protocol (which was almost trivially easy) and wrote their own clients. Because it was the time of dot-com mania, lots of commercial and semi-commercial applications sprang up using the same protocol, without any of the authors ever bothering to consider whether the protocol was usable at all.

      It's only now, about 3 years later, that we're finally seeing work to move 'Gnutella' into a more workable system (see the superpeer system of Gnucleus, for example).
    • by kriegsman ( 55737 ) on Friday August 09, 2002 @07:50AM (#4038232) Homepage
      This problem was first identified and analyzed in 1833 by William Lloyd. It went something like this [dieoff.org]:
      The tragedy of the commons develops in this way. Picture a pasture open to all. It is to be expected that each herdsman will try to keep as many cattle as possible on the commons. Such an arrangement may work reasonably satisfactorily for centuries because tribal wars, poaching, and disease keep the numbers of both man and beast well below the carrying capacity of the land. Finally, however, comes the day of reckoning, that is, the day when the long-desired goal of social stability becomes a reality. At this point, the inherent logic of the commons remorselessly generates tragedy.

      As a rational being, each herdsman seeks to maximize his gain. Explicitly or implicitly, more or less consciously, he asks, "What is the utility to me of adding one more animal to my herd?" This utility has one negative and one positive component.

      1. The positive component is a function of the increment of one animal. Since the herdsman receives all the proceeds from the sale of the additional animal, the positive utility is nearly + 1.

      2. The negative component is a function of the additional overgrazing created by one more animal. Since, however, the effects of overgrazing are shared by all the herdsmen, the negative utility for any particular decisionmaking herdsman is only a fraction of - 1.

      Adding together the component partial utilities, the rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another.... But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit -- in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all.


      The problem in general arises when you've set up a situation where if each user acted in both a rational and self-interested way, the system overall would collapse for all the users.

      When designing any kind of multi-user system, it's critical to plan for the "what if all the users (or half of them) suddenly got very selfish" case. What results are things like disk quotas: central-system-enforced limits on individual behavior.

      In a system like the gnutella network, where there is no 'central system' to enforce 'community-minded' behavior, the eventual collapse of the system can be predicted as a function of overall population, presuming that there are always a few people who are more selfish than the rest.
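
      A back-of-the-envelope model of that prediction, with entirely made-up numbers (selfish nodes here generate ten times the load of polite ones, and everyone's share shrinks once total load exceeds capacity):

      # Toy model: effective throughput collapses as the share of selfish nodes grows.
      def effective_throughput(nodes=1000, selfish_fraction=0.1,
                               polite_load=1.0, selfish_load=10.0, capacity=2000.0):
          selfish = int(nodes * selfish_fraction)
          total_load = selfish * selfish_load + (nodes - selfish) * polite_load
          # Once offered load exceeds capacity, everyone's share of service shrinks.
          return min(1.0, capacity / total_load)

      for f in (0.0, 0.1, 0.25, 0.5):
          print(f, round(effective_throughput(selfish_fraction=f), 2))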

      Centralized systems like Napster actually had an advantage in that the centralized servers could establish and enforce 'fairness' policies that kept selfish users from triggering a 'Tragedy of the Commons'.

      -Mark
      • In a system like the gnutella network, where there is no 'central system' to enforce 'community-minded' behavior, the eventual collapse of the system can be predicted as a function of overall population, presuming that there are always a few people who are more selfish than the rest.

        Sounds logical, doesn't it? In fact it isn't necessarily so. Consider the internet: the IP infrastructure is in fact P2P. Let's apply what you said to it:

        "In a system like the internet, where there is no central system to enforce community-minded behaviour; the eventual collapse of the system can be predicted as a function of overall population, presuming that there are always a few people who are more selfish than the rest."

        Doesn't sound so obvious anymore, does it?

        Actually, this is an example of the iterated prisoner's dilemma; there is no known solution to that in the general case. It all depends critically on the details. However, I think Gnutella lacks some features that would have allowed it to weather situations that Kazaa seems to handle very much better.

        There's always going to be some leeches. The point is to make sure that the leeches don't gain anything by abusing the mechanisms the network supplies. With Gnutella, and to some extent Kazaa, they do gain... and if they end up abusing it too much, the network dies.

        • It seems to me that the Tragedy of the Commons kicks in when the 'leeches' hit a certain density within the general population, and when their 'leeching' begins to have a measurable effect on the average non-leeching individual.

          And, in fact, we have seen exactly this kind of thing [slashdot.org] kicking in in certain parts of the Internet, like broadband service and pricing. AT&T has started separating out the 'leeches' ("heavy users") from average users, and applying negative feedback (higher prices) to their leeching behavior. Again, you can see how it takes a centralized administration (AT&T) to bring the system back into balance.

          So you can either (1) hope that your system never becomes popular, or (2) hope that the density of leeches in your population never exceeds a certain 'thermal runaway' threshold, or (3) hope that the very worst leeching behavior doesn't substantially degrade service for everyone else, or you can (4) design the system so that at least one of those is true. Since popularity is desirable in a p2p system, and there are always some leeches, you need to design in limits to how much leeching one user can do -- an interesting problem in an open-source, p2p network.

          -Mark
          • AT&T has started separating out the 'leeches' ("heavy users") from average users, and applying negative feedback (higher prices) to their leeching behavior. Again, you can see how it takes a centralized administration (AT&T) to bring the system back into balance.

            No no. AT&T are very able to control the bandwidth available to anyone on their network; look up 'traffic shaping'. It's interesting that they have chosen not to do this. Apart from a few crackers there are no leeches.

            The real point is that most people who buy a broadband contract off them don't understand what they have just signed, so when congestion occurs, they start moaning. AT&T aren't going to go "well, you shouldn't have signed the contract if you didn't understand it", so they've created this fictitious 'leech' guy who is supposedly stealing all the bandwidth. Then AT&T realised that they could actually make money on bandwidth they had already sold, by charging over a certain download limit - but it's just profiteering; there's no real issue, or at least not if AT&T are running their network well.

            I don't agree with your 4 'hopes'. These do not cover all the options you have in designing these networks. There's no hoping: you design it to have certain properties. If you write the software, you have central control anyway, in your terms. Every node in a P2P network can be a policeman if necessary.

            • Leeches aren't fictional, and AT&T already knows about traffic shaping. Problem is, traffic shaping throttles your peak or burst bandwidth. For people who don't leech or abuse their connection, it's nice to let them occasionally burst to higher bandwidths. If you apply traffic shaping they won't be able to burst even if it's only 1 time a month for a few tens of megabytes. The billing change AT&T's doing hits leeches for long-term average usage without chopping off bursts for non-abusers.

              I like AT&T's approach. Do a single 10-megabyte upload a month, you get full burst rate. Run a file-sharing server transferring at a megabit a second 24x7, you get hit with a big bill and a warning to either curb your transfers or pay full-time for a dedicated chunk of bandwidth.

              • I like AT&T's approach.

                How many shares do you own? *snicker*

                Like the previous poster said, ISPs who gouge their users (not "leeches") for using their unlimited connection are simply profiteering.

                The SANE and FAIR thing to do is to use traffic shaping to severely limit the "hogs" rate during peak traffic times so the light users like grandma don't suffer. The more bandwidth you use over time, the less you get to use when it's scarce - but at 3am, even the hog should be able to use his full 2Mbps if it's not being used, because unused bandwidth doesn't cost the ISP anything.

                --

  • by Anenga ( 529854 ) on Friday August 09, 2002 @06:02AM (#4038077)
    Stop the FUD.

    People need to realize that Gnutella is now fast becoming a big player in the function and value of the Internet.

    Gnutella, in my view (and many others), is not a mecca for porn, warez, and MP3's - but a pool where anyone can share any type of file.

    A bigger trend now showing up is linking to files on the Gnutella network instead of the common http://site.com/file.zip. How does this benefit you? You get faster downloads by utilizing partial file sharing, swarm downloads, etc. It also benefits servers greatly. They are no longer the only source for the download, because once the file gets onto a Gnutella client, that client searches for more peers and shares the load with them. This can save TREMENDOUS bandwidth.

    For example, a Linux site could link to an ISO like this: magnet:?xt=urn:sha1:(InsertSHA1)&dn=Linux&xs=http://www.linux.org/linux.iso

    (not an actual correct MAGNET link, but you get the idea)

    When someone clicks that, it opens up in a Gnutella client. The client begins downloading from that source and searching for the same file on the Gnutella network. Through the entire life of the download, it will continue to add sources. You could then be downloading from over 30 people at once, reaching speeds of 10 Mbps or more.
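
    A simplified sketch of what a client does with such a link: pull out the hash, the display name, and the direct "xs" source, then keep asking the overlay for other peers with the same hash (real MAGNET handling has more fields and edge cases than this):

    # Simplified MAGNET link parsing (real clients handle many more fields).
    from urllib.parse import urlparse, parse_qs

    def parse_magnet(link: str) -> dict:
        qs = parse_qs(urlparse(link).query)
        return {
            "urn": qs.get("xt", [None])[0],   # e.g. "urn:sha1:<hash of the file>"
            "name": qs.get("dn", [None])[0],  # suggested display name
            "sources": qs.get("xs", []),      # direct sources to start downloading from
        }

    info = parse_magnet("magnet:?xt=urn:sha1:ABCDEF&dn=Linux&xs=http://www.linux.org/linux.iso")
    # A client would start downloading from info["sources"] and at the same time
    # query the Gnutella network for other peers sharing the same urn.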

    Oh, the power of Gnutella. Can KazAa (FastTrack) do that?! (Well, it can, kind of :P)

    Oh, how do you know if that's the correct file? Hashing. Gnutella servents are implementing hashing now, where each file has its own hash. So when searching for files, they can swarm your downloads. You are GUARANTEED that all the sources you're downloading from are in fact serving the same file, because they have the same hash (SHA-1). That's what's getting the RIAA so scared :P No longer can they infect files and make them the same file size/file name.
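
    For the curious, the hash being talked about is just the SHA-1 of the file's contents; a few lines compute it (this is the generic calculation, not any particular servent's exact scheme):

    # Generic SHA-1 of a file's contents, the basis of the hash matching described above.
    import hashlib

    def sha1_of_file(path: str) -> str:
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):  # stream, don't load it all at once
                h.update(chunk)
        return h.hexdigest()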

    Also new on the scene (well, new as in newly popular) is Bitzi [bitzi.com]. Bitzi catalogs hashes (bitprints). You can search through their database and find files by hash. Click the hashes, and you can download a file. Each file on Bitzi has a "Bitzi Ticket" where you can rate the file. You can mark it "Invalid/Misleading", which means it is not the file you want. You can mark files if they contain viruses too. I can almost hear the sweat dripping from the RIAA lawyers' foreheads.

    Want to see the future of Gnutella? Check out Shareaza [shareaza.com] (WINE compatible).

    Supports all of what I discussed in this post.
      You are GUARANTEED that all the sources you're downloading from are in fact serving the same file, because they have the same hash (SHA-1).

      It is entirely possible that two different files can have the same hash. SHA-1 produces a 160-bit digest. If you have 2^161 unique documents, you are guaranteed to have at least 2^160 duplicate hashes. Hashing algorithms are meant to detect malicious tampering of files and random errors. They are not meant to guarantee the uniqueness of a file.

      Of course 2^161 is around 3x10^48, so the network won't have that many documents for a long, long time. However, the odds of finding a duplicate are much higher than most people would think, a la the birthday paradox (if you have a room of 23 people, there is over a 50% chance that two people share a birthday). Similarly, if you have a hashing algorithm that maps into one quadrillion unique numbers (50 bits), you need around 40 million documents before the chance of a duplicate exceeds 50% (and 110 million documents before it exceeds 99%). I'm not going to calculate it for 160 bits (with 2 billion documents, the odds of a duplicate are less than 1x10^-9, and I'd have to write a new program to go higher than that).
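
      Those figures fall out of the standard birthday-bound approximation n = sqrt(2N ln(1/(1-p))); a quick check:

      # Birthday-bound approximation: how many documents you need before a
      # collision among N = 2^bits possible hash values is likely with probability p.
      import math

      def docs_for_collision_probability(bits: int, p: float) -> float:
          N = 2.0 ** bits
          return math.sqrt(2.0 * N * math.log(1.0 / (1.0 - p)))

      print(docs_for_collision_probability(50, 0.50))   # ~4.0e7, the "40 million" above
      print(docs_for_collision_probability(50, 0.99))   # ~1.0e8, the same ballpark as "110 million"
      print(docs_for_collision_probability(160, 0.50))  # ~1.4e24: why 160 bits is comfortable here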

      That's what's getting the RIAA so scared :P No longer can they infect files and make them the same file size/file name.

      The RIAA can certainly claim that their file has the same size, name, and hash. You won't know for sure until you download the entire file and calculate its hash.

  • by RavenDuck ( 22763 ) on Friday August 09, 2002 @06:03AM (#4038080)
    I'm not a coder myself, and am probably not very up to date on the whole p2p scene (other than knowing that Limewire doesn't seem to work real well on my box at work), but one of the real problems on the p2p networks seems to be trust. With the recent news about entertainment industry bodies seeking legislation to DoS the networks, and the common user experience of crap files on the network (incomplete or incorrectly labeled files), I wonder whether someone could make a system based on the same sort of web of trust model that PGP/GnuPG uses.

    The keyserver infrastructure is already there, and the apps (like GnuPG) are readily available cross-platform. So why can't p2p clients allow content to be signed, so that you can establish a web of trust as to whose content can and cannot be trusted? Downloading a signature of a file to check its validity would certainly help reduce the chance of downloading dodgy content. This should be especially useful as you tend to get groups of people who are all interested in the same sorts of files (anime, divx, certain bands, etc), so you could imagine a good web forming fairly rapidly.

    Making a valid OpenPGP key is a computationally intensive task, suggesting that few people would make thousands of them on the possibility they would be blacklisted. They also don't require any form of real identification, making them effectively anonymous. Also gaining a good trust metric would be an incentive to keep the same key, especially if downloading was restricted based on your trustability.

    I can't think of any good reason that this couldn't be worked into an existing p2p network. Whether it would work in practice I have no idea. Anyone who knows more about this than me care to comment? Anyone done it already?
    • I think peer-based trust will rapidly become an essential element of P2P. Digital signatures for identity authentication, combined with some kind of peer-based trust and trust-based allocation of network resources, seems like the way to go if the RIAA is going to start trying to infiltrate the networks.

      The advogato trust metric [advogato.org] and slashdot's moderation system are the most prominent implementations that try to solve the problem of peer based trust. It clearly needs more research.
    • > Making a valid OpenPGP key is a computationally intensive task, suggesting that few people
      > would make thousands of them on the possibility they would be blacklisted. They also don't
      > require any form of real identification, making them effectively anonymous. Also gaining a
      > good trust metric would be an incentive to keep the same key, especially if downloading
      > was restricted based on your trustability.

      I did a project that concentrated essentially on what you say here -- making key (identity) generation difficult. It's easy to make RSA keys (for instance) quickly if you don't care about security (and also difficult to independently verify that the key is "valid"), but I give a way to provide a token along with the key that's independently verifiable and difficult to create. This token can also "grow" in strength over time. Check out the paper here:

      http://www-2.cs.cmu.edu/~tom7/papers/peer.pdf

      We don't talk much about creating a "web of trust" kind of thing, but do talk about "cold hard evidence" of cheating. The next step is to see what other kinds of misbehavior can be audited (and how someone can provide proof of infraction), for instance, sending out too many flood messages onto the network.
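
      The "difficult to create, easy to verify" token is in the same spirit as a hashcash-style proof of work; a bare-bones illustration (this is not the construction from the linked paper, just the general idea):

      # Hashcash-style proof of work bound to a public key: slow to mint, instant to verify.
      # A bare-bones illustration of the general idea, not the scheme from the paper above.
      import hashlib
      from itertools import count

      def mint_token(pubkey: bytes, difficulty_bits: int = 20) -> int:
          target = 1 << (160 - difficulty_bits)   # SHA-1 output space is 2^160
          for nonce in count():
              digest = hashlib.sha1(pubkey + str(nonce).encode()).digest()
              if int.from_bytes(digest, "big") < target:
                  return nonce                    # ~2^difficulty_bits hashes of work, on average

      def verify_token(pubkey: bytes, nonce: int, difficulty_bits: int = 20) -> bool:
          digest = hashlib.sha1(pubkey + str(nonce).encode()).digest()
          return int.from_bytes(digest, "big") < (1 << (160 - difficulty_bits))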
  • From my experience making Andromeda [turnstyle.com], the main reason people restrict access to their files is that upstream bandwidth is limited, and they'd rather keep it for themselves (or a small group of friends).

    If the cable/dsl providers were mostly selling symmetric rather than asymmetric services, I'd bet that those same users would be much less likely to restrict access. Furthermore, I think the providers are well aware of that, so don't expect symmetric service to become common anytime soon.

  • An obvious solution (Score:2, Interesting)

    by reflector ( 62643 )
    and in an effort to give Xolox users faster downloads, its programmers had configured the program to frequently "re-query" the network to check for desired files.

    Unfortunately only Shareaza (www.shareaza.com) and, IIRC, Bearshare have implemented file queueing. It's like handing out paper tickets at the deli: instead of asking the person behind the counter every 5 seconds if they're ready for you, you can ask at normal intervals (60 sec default for Shareaza), because your spot in line is guaranteed and there's no real advantage in asking more often.
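
    The deli-ticket idea in miniature (illustrative only; a real servent tracks far more state than this):

    # Deli-ticket upload queue: asking again doesn't move you up the line.
    from collections import OrderedDict

    class UploadQueue:
        def __init__(self):
            self.waiting = OrderedDict()  # host -> ticket number, in arrival order
            self.next_ticket = 1

        def request(self, host: str) -> int:
            """Repeat requests just return the ticket the host already holds."""
            if host not in self.waiting:
                self.waiting[host] = self.next_ticket
                self.next_ticket += 1
            return self.waiting[host]

        def position(self, host: str) -> int:
            return list(self.waiting).index(host) + 1

        def start_next_upload(self):
            """When a slot frees up, serve the host at the front of the line."""
            if not self.waiting:
                return None
            host, _ticket = self.waiting.popitem(last=False)
            return host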

  • Big, bad hash DB? (Score:3, Interesting)

    by Jeppe Salvesen ( 101622 ) on Friday August 09, 2002 @06:40AM (#4038131)
    We all complain about the amount of crap (incomplete and low-quality files and such) that we receive through the p2p networks. How about someone creating a DB where you send the hash and it returns the actual contents? Maybe you could even send the textual request, and it would return the hashes of files that match - and then you can search for files matching the hash.

    Would this be feasible at all, do you think? It would be an additional p2p distributed network (we gotta make sure the DB is accurate and relatively synchronized, so we can't give direct, universal write access). I'm thinking that you open a socket to the server, and just keep sending requests as you search for files, and as you open files. This way, we would also be able to blacklist files we don't want distributed, blocking those from being returned by the initial search.

    You think the RIAA guy monitoring this discussion just choked?
    • Earlier in this thread, someone mentioned exactly such a database.

      I *believe* it was called Bitzi.
      • Yer right. Bitzi looks close to what I propose.

        However, it seems to be built around a company. That is bad news. This sort of service should be based on peer-to-peer technology, and should not be owned by someone who can be sued. There are of course problems involved in maintaining such a database within a p2p network (collision management, etc).

        Unrelated : If a law enforcement official finds a piece of kiddie pr0n, they could use such a service to find others with the same piece under a different name. On the flip side, the Chinese government would use the technology to track down dissidents who share subversive literature by renaming the files.
    • Any solution that talks about a "database" is probably trouble, because setting up a "database" requires some sort of trusted centralized server, or if done peer-to-peer, is subject to the same sorts of problems that the peer-to-peer systems already face. (ie, what about the RIAA computers that inject their own hashes into the system?)

      One basic problem with relying on hashing for the identification of files is that a malicious user can still send you a file, telling you it has the right hash, and you won't be able to check until you receive the whole thing. (Or you won't be able to check at all if you download only part of the file from them!)
  • by Mirk ( 184717 ) <slashdot@miketTE ... k minus caffeine> on Friday August 09, 2002 @07:02AM (#4038153) Homepage
    This is simple. The solution to the problem of quality of service is just to invite your close, trusted friends onto your Gnutella network and not let the plebs out there know about it.

    [pause]

    Now if only I could find out where those elitist bastards are hiding! :-)

  • Here's what you can do to work towards a better p2p future for everyone:
    • See a new client? Check it out. Improved networks can't take off without a user base. If it sucks, uninstall it - but send a bug report/feature request. C'mon. If you can spend 2 minutes writing a slashdot post you can fire off a quick email.
    • Share files. People think that if they share files it will unavoidably clog their upstream link and slow their downloads (and web browsing) to a crawl. Not true! Simply limit how much upstream bandwidth the client will use to (just a rough estimate) 60-70% of your upstream capacity; a rough sketch of such a throttle follows this list. You'll be amazed at the difference. If the client lacks a bandwidth throttle, a serious problem for TCP-based networks, send a bug report.
    • Get involved politically. Write your congresscritters and tell them you don't want to see competition in the home broadband arena killed by deregulation. Write your cable/phone company and tell them you oppose monthly transfer caps. Call your friends and make sure they're aware of the issues. Vote.
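
    A rough sketch of that upstream cap as a token bucket (the 65% figure and the API are illustrative assumptions, not any particular client's code):

    # Token-bucket upstream throttle: cap upload rate at a fraction of the link's capacity.
    import time

    class UploadThrottle:
        def __init__(self, link_bytes_per_sec: float, fraction: float = 0.65):
            self.rate = link_bytes_per_sec * fraction  # bytes/sec we allow ourselves
            self.tokens = self.rate
            self.last = time.time()

        def wait_to_send(self, nbytes: int) -> None:
            # Assumes individual chunks are smaller than one second's worth of budget.
            while True:
                now = time.time()
                self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)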

    This is the bare minimum you should be doing if you care about/use p2p networks. If you're not willing to do this, stop downloading. Seriously. If you want to do more, there's a lot to be done.
    • If you're a programmer, join an open source project and develop. Your time and skills are needed.
    • If you're a logical thinker and like analyzing networks and complex node relationships, join a p2p protocol discussion forum. I suggest lurking for a while, though - there's a lot to learn if you're new to p2p protocol design.
    • Whether you develop, research, or both, recognise that other people are going to have ideas that seem stupid to you and your ideas may seem stupid to other people. Don't waste time arguing. Think before you open your mouth (or put your hands on the keyboard) and recognise that the people making the actual coding decisions have an in-depth understanding of what's going on. Really bad ideas are shot down before they make it into the code -- flame wars are never necessary.

    Need a link? Check here [gnucleus.net]. It's a great client if you're windows-bound, it's open source, and it has a lively discussion forum.
    • I want to help, but I've run into snags:

      See a new client? Check it out.

      I don't like blue screens, I don't like spyware, I don't know how to use CVS, and I don't have the second hard disk to hold a Linux installation. (My current hard disk already dual-boots winme and win2k, and FIPS can't shorten an NTFS partition.) Besides, some of the apps let a server administrator kick off any user who connects to the Internet at ISDN data rate or slower.

      Share files.

      I share as much as I am able, but if I share files, I will cut off the person downloading from me when I go offline. Because of how I connect to the Internet, whenever somebody else in the household wants to make a voice telephone call, I have to disconnect from the Internet.

      Need a link? Check here [gnucleus.net].

      Gnucleus is a Gnutella client. I've read rumors that the design of the Gnutella network is not very compatible with connections slower than 64 kbps, which unfortunately is the fastest connection that many users in many geographical areas can afford. To get a faster connection would require either upwards of $500 per month for a T1 or $200,000 to move house. Is it true that Gnucleus will not work well over dial-up?

  • The problem is inherently NP-incomplete.

    You want a system without a central authority that can be shut down, so you create a peer-to-peer system.

    The peer-to-peer system pretends to be a virtual network over a real network using point-to-point links to establish proximity relationships between sets of peers, mostly ignoring physical proximity and bandwidth constraints.

    In order to force the proximity issue and address the bandwidth scaling issues, you invent a concept of "super nodes", which end up being self-selected.

    In order to get better performance for themselves, people play "the prisoners dilemma", and rat everyone else out with clients that gang up on requests to ensure disproportionately favorable service.

    In order to lock out these clients, you create a central authority, but try to make it decentralized (e.g. "karma", voting, self-regulation, etc.) to maintain the original design goals.

    But there are too many strategies to use to attack this. The current "attacks" are taking the form of over-requesting to the point of denial of service... and these are people not intent on destroying the network.

    Say you figure out a way to create forced altruism for requests... the node equivalent of the GPL on source code, when you can't enforce the GPL. The natural reaction will be to move on to the next "attack": the "bad guys" pretend they are multiple nodes by avoiding intersecting connectivity with peers, so that dual adjacency won't give them away, and let them be countered.

    So you move to a different protocol for "super nodes"; you counter the next obvious attack ("pretend to be a super node") by locking down binaries ("blessed binaries").

    But the next attack is to modify the kernel that is running the blessed binaries, and defeat the attack that way (a common "borg" attack on the "blessed binary" NetTrek clients).

    Now take active attacks. "Automatic Karma" can deal with dummy files -- "poisoning"... at least until they start intermixing bad with good. But it can't deal with the other issues, without a client lock-down. At which point, you lose repudiability (original design goal out the window: legal attacks work again).

    The only real way to deal with this is to define a new protocol that is not virtual point-to-point linked.

    And that can be blocked at the routers, unless all other content moves to the same protocol, so it can't be discriminated against.

    The only way you are going to be able to create a "blacknet" is to actually create a "blacknet".

    -- Terry
    • by WolfWithoutAClause ( 162946 ) on Friday August 09, 2002 @09:32AM (#4038710) Homepage
      The peer-to-peer system pretends to be a virtual network over a real network using point-to-point links to establish proximity relationships between sets of peers, mostly ignoring physical proximity and bandwidth constraints.

      Actually, you mostly don't want to ignore these constraints. The P2P should make use of closer servers (mostly, but not exclusively).

      In order to get better performance for themselves, people play "the prisoners dilemma", and rat everyone else out with clients that gang up on requests to ensure disproportionately favorable service.

      I don't see that this is necessarily a real issue. After all the server that has the file you want can keep a queue of requestors, and serve it in strict first come, first served order. 'Take a ticket and sit down over there.' It works. Asking more than once doesn't get you anywhere; and may even get you lower down the list.

      The only real way to deal with this is to define a new protocol that is not virtual point-to-point linked.

      Unclear. Very unclear.

      Now take active attacks. "Automatic Karma" can deal with dummy files -- "poisoning"... at least until they start intermixing bad with good.

      Yes, but users can usually play files before they've finished downloading, and cryptographic hashing of file contents can prevent people from spoofing files, even when a file is downloaded from multiple servers simultaneously.
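
      For what it's worth, the verification side is only a few lines. This is a hedged sketch using plain SHA-1; in practice the hash would arrive as metadata alongside the search hit.

        import hashlib

        def sha1_hex(data: bytes) -> str:
            return hashlib.sha1(data).hexdigest()

        # Hash advertised alongside the search result (assumed trustworthy
        # enough to pick a download by).
        good_file = b"the content everyone actually wants"
        advertised_hash = sha1_hex(good_file)

        def accept_download(data: bytes, expected_sha1: str) -> bool:
            """Keep a completed download only if it matches the advertised hash,
            no matter which server (or how many servers) it came from."""
            return sha1_hex(data) == expected_sha1

        print(accept_download(good_file, advertised_hash))                       # True
        print(accept_download(b"renamed junk from a poisoner", advertised_hash))  # False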

    • Ok, but what does this have to do with NP complete? I don't see an algorithm or problem statement anywhere, so what is the O(n) that we are looking at? Go back to bed.
      • I was commenting on the solvability of the problem using P2P as a hammer for this particular screw.

        The GNUtella architecture is broken by design, for the goals it wants to achieve.

        The lack of a choke-point was the real design goal for the system ("a Napster that can't be shut down by a record company"), but it means you can't rely on voluntary compliance with social norms, particularly when one of the most effective attacks is simple non-compliance. Adding security adds non-repudiation, which adds back a legal hand-hold that can act as a choke-point.

        You're screwed if you enforce norms, and you're screwed if you don't.

        The GNUNet architecture is somewhat similarly broken (in that it can be censored by router blocking), but it's at least a step in the right direction for solving that problem.

        It's only if the Internet itself gets away from protocols subject to transparent proxying that end-to-end guarantees can be maintained. For that to happen, it has to be impossible to distinguish between traffic on the basis of content.

        With any other approach, the traffic can be filtered out through intentional failure to propagate.

        The only way you can win is to make it too expensive: if the RIAA can only get its way by shutting down the Internet, that will never happen, but anything short of that is probably doable. So you have to make it so they would have to shut down the Internet to stop you.

        I guess I'm saying that they are attacking the problem at the wrong level, simply because it's tractable at the level they are trying to attack it... like looking for your contact lens under the streetlight instead of in the alley where you lost it, because the light's better.

        Hence "Inherently N-P Incomplete".

        -- Terry
  • Lies? (Score:2, Interesting)

    by reflector ( 62643 )
    "Note that clients like Qtrax and Shareaza allow leaves with limitless numbers of [super nodes]," wrote BearShare's Falco in the GDF. "This incredibly selfish behavior causes a flood of query traffic. Although it maximizes results for the local user, it impacts the network greatly. If every client behaved like Qtrax, Gnutella would surely fall."

    Why does he claim that Shareaza allows limitless numbers of supernodes? Shareaza DOES NOT support more than 10. You can enter any number in Shareaza options, but anything over 10 gets dropped.

    Is he just misinformed on this issue? Or is he just jealous that Shareaza is a better app and he is losing market share to it? ;)

  • Get MS to clean up its act on bandwidth hogging.

    I know there are settings that can be changed, but most people don't change them.

    I access a web page, and it downloads it to my system.
    I want to print the same page, and it downloads it again.
    I want to save the same page, and again it downloads it.

    And what about radio over the net?

    I've got dial-up at what is supposed to be 56k (Earthlink), but they
    only give me 28.8 at best...

    And I believe I helped finance free cable boxes for other Earthlink
    customers...

    So what's the deal with this concern over bandwidth?

    It seems pretty clear to me that my ISP could give me more bandwidth
    and speed if things I have no control over were better dealt with;
    even spam accounts for more mail than I would otherwise get.
  • by MoNickels ( 1700 ) on Friday August 09, 2002 @09:34AM (#4038721) Homepage
    I'm pretty tired of all the complaints about freeloading on any system even remotely like Gnutella. It's the same with Carracho, Hotline, FTP, what have you: you will always have more freeloaders than sharers until equilibrium is achieved; equilibrium, though, will never be achieved.

    The ratio of users who have useful, desirable files to share to users who do not will always be low, perhaps 1 to 10 or 1 to 100. This is because the "freeloaders" cannot and do not have files to share until they get them from someone else. They will continue to be non-sharing nodes until such time as the sharers with desirable files open up the portcullis.

    The point of the system is filesharing: Why impose restrictions on its primary function? The way to stop "freeloading" is not to restrict downloads, but to *increase* them. The closer to the unachievable equilibrium we come, the less "freeloading" there will be.
  • by MarvinMouse ( 323641 ) on Friday August 09, 2002 @10:00AM (#4038880) Homepage Journal
    Why not have clients automatically block anyone who starts doing instant requeries?

    I am not sure exactly how the Gnutella protocol works, but if every valid client had this blocker, then these "super-nodes" would not be able to get any information in or out.

    Basically, the idea would be that when one of the malicious nodes starts sending multiple queries to a node running this blocking code, that node would determine whether or not the traffic is legit. If it is not, the offending node would be blocked. Eventually, a "fence" would be put up around the offending nodes, and the damage they cause would be limited to the non-standard clients themselves.

    It may also be prudent to make the block last for a specific time period, and perhaps even to pass the offending nodes' addresses on to other clients so they block them as well.

    If the Gnutella protocol allows this, it would be the most effective way of stopping malicious clients, because as soon as they threaten the infrastructure, they are fenced off. A rough sketch of such a blocker follows below.
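
    Here is what that blocker might look like (all names hypothetical; nothing here comes from an actual servent): count queries per neighbour over a sliding window, refuse anything from a neighbour that floods, and let the block expire after a while.

      import time
      from collections import defaultdict, deque

      class RequeryBlocker:
          """Temporarily 'fence off' neighbours that requery too fast."""

          def __init__(self, max_queries=20, window_s=60, block_s=600):
              self.max_queries = max_queries          # queries allowed per window
              self.window_s = window_s                # sliding window, in seconds
              self.block_s = block_s                  # how long a block lasts
              self.recent = defaultdict(deque)        # peer -> recent query times
              self.blocked_until = {}                 # peer -> block expiry time

          def allow(self, peer, now=None):
              now = time.time() if now is None else now
              if self.blocked_until.get(peer, 0) > now:
                  return False                        # still inside the fence
              stamps = self.recent[peer]
              stamps.append(now)
              while stamps and stamps[0] < now - self.window_s:
                  stamps.popleft()
              if len(stamps) > self.max_queries:
                  self.blocked_until[peer] = now + self.block_s
                  return False                        # flooding: block for a while
              return True

          def shareable_blocklist(self, now=None):
              # Addresses that could be passed to other clients, as suggested above.
              now = time.time() if now is None else now
              return [p for p, t in self.blocked_until.items() if t > now]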
  • Once you have to authenticate, that leaves the "authenticators" open to legal attack. Remember Napster?

    Goodbye, Gnutella.

    Yes, something has to be done to clean up the bandwidth problem, but I don't think THIS is it.
  • by wuHoncho ( 587718 ) on Friday August 09, 2002 @10:45AM (#4039268)
    I've been reading through some of the news and related sites on this topic, and it seems possible that one or more of these Gnutella clients that send massive numbers of requests in such short periods could actually be a maliciously intended program. Some of the developers who make these have yet to respond to any of these problems, even though there have been repeated attempts to contact them about the situation. The way some of them (I'm looking at QTRAXMAX right now) word their sales pitch sounds eerily similar to some e-mails I've gotten with links to these sites, or those mysterious 53k-attachments-to-emails-that-just-say-hi-from-some-guy-named-boris-in-siberia that are so obviously worms or viruses. The way they currently work looks eerily like a DoS attack: use people's own greed to flood a network with requests. It would actually be a pretty clever strategy: millions of users instantly flock to the program to maximize their gain out of Gnutella, only to block each other out when they send 83 gazillion file requests a second. Classic Nash.

    Who would be behind such an attack? There are many possibilities. The recording industry is definitely one of them. There could be others. Who knows.

    The point is you should all be careful what you install on your computer or even download. Millions of people around the world know how to program at varying levels of control over many different kinds of computers with different purposes. It's like the Force - some use it for good, some don't. There's bound to be at least a couple who are going to write a full-fledged application that is really just one big worm.
  • The problem is that Gnutella's reliance on broadcast forwarding and indirect communication will always allow rogue peers to exploit the network's bandwidth or query traffic.

    There are a number of alternative discovery mechanisms which do not suffer from these kinds of architectural problems.

    For example, NeuroGrid [neurogrid.net] and alpine [cubicmetercrystal.com] both use social discovery and peer profiling to prevent bandwidth hogging or query spamming.

    There are also hybrid networks [sourceforge.net] that use super peers, like the Kazaa and Grokster clients.

    There is only so much you can do to improve a flooding broadcast architecture (see the back-of-the-envelope sketch below). Gnutella will always have some kind of bandwidth and query problems, no matter how optimized the clients become.
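
    To make the flooding cost concrete, here is a back-of-the-envelope sketch; the neighbour count and TTL are assumptions for illustration, not measurements of any real network.

      def flooded_messages(neighbours: int, ttl: int) -> int:
          """Upper bound on messages from one query in a flood/broadcast overlay:
          each peer forwards to (neighbours - 1) others until the TTL runs out.
          Ignores duplicate suppression, so real numbers are somewhat lower."""
          total = frontier = neighbours
          for _ in range(ttl - 1):
              frontier *= neighbours - 1
              total += frontier
          return total

      # With 5 neighbours per node and a TTL of 7, one query can generate
      # on the order of 27,000 messages:
      print(flooded_messages(5, 7))   # -> 27305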
  • One Little Problem (Score:3, Insightful)

    by BlackGriffen ( 521856 ) on Friday August 09, 2002 @01:39PM (#4040551)
    If they make it so that they can control who is on Gnutella, won't the RIAA be able to sue whoever has that control? Bad idea, folks. The simple solution is bandwidth limiting plus blacklists for abusive IPs; a minimal sketch of that combination follows below.

    BlackGriffen
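
    A token bucket plus a plain blacklist is about the simplest form of that idea (a hypothetical sketch, not any real client's throttling code; the IPs are documentation-range examples):

      import time

      class UploadThrottle:
          """Token bucket: allow roughly `rate` bytes/sec of upload on average,
          and refuse blacklisted addresses outright."""

          def __init__(self, rate_bytes_per_s, burst_bytes):
              self.rate = rate_bytes_per_s
              self.capacity = burst_bytes
              self.tokens = burst_bytes
              self.last = time.monotonic()
              self.blacklist = set()                  # known-abusive IPs

          def allow(self, ip, nbytes):
              if ip in self.blacklist:
                  return False
              now = time.monotonic()
              self.tokens = min(self.capacity,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now
              if nbytes <= self.tokens:
                  self.tokens -= nbytes
                  return True
              return False                            # over budget: defer the send

      throttle = UploadThrottle(rate_bytes_per_s=32_000, burst_bytes=64_000)
      throttle.blacklist.add("203.0.113.9")           # example abusive address
      print(throttle.allow("198.51.100.7", 16_000))   # True
      print(throttle.allow("203.0.113.9", 16_000))    # False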
  • But if we create as much unnecessary Internet traffic as possible, we'll create so much fibre demand that 360Networks may be able to get their stock up to $23 again, and I'll break even. So keep downloading, everyone. Download, delete, download, delete.

