Gnutella Not Scaling? 137

cbull writes "ZDNet Music has an article arguing that 'Gnutella Is Going Down in Flames.' Basically, the argument is that Gnutella isn't as scalable as Napster."
  • I have finally done it! I've traveled back in time and read an article that was posted on Slashdot two weeks ago. Wooohooo! I'm going to be rich!
  • by Xentax ( 201517 ) on Friday September 22, 2000 @07:14AM (#760928)
    Well, this is just shooting from the hip, but someone should look into writing an improved client for broadband-connected users. This client would cache results to and from its immediate connections, and perhaps out to nodes two or three hops distant.

    If you've got a big pipe, and you're going to be connected to gnutella for awhile, this would improve the performance of your client and those closest to you.

    Of course, if you really want improvement, you'd have to build this capability into the protocol. Allow clients to register as either low or high bandwidth. Then low bandwidth clients could do anything, but traffic could only go through them for a level or two. Ideally, you'd want every client to be able to reach a high-bandwidth node within 3-5 hops. A connected client would then note and rely upon these distribution nodes to do the work. Perhaps even reconnect to distributors directly...

    Just a thought. Isn't this the kind of thing that Freenet already does?

    Xentax
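The caching idea in the comment above can be sketched quickly. This is a minimal Python illustration, not code from any real Gnutella client; the class name, the five-minute expiry, and the three-hop limit are all made up for the example.

```python
import time

class QueryCache:
    """Sketch of the broadband-node cache described above: remember
    recent query hits from nearby nodes so repeat searches can be
    answered locally instead of being re-flooded to neighbors."""

    def __init__(self, max_age_secs=300, max_hops=3):
        self.max_age = max_age_secs
        self.max_hops = max_hops   # only cache results from nearby nodes
        self.entries = {}          # query string -> (timestamp, results)

    def put(self, query, results, hops_away):
        # Ignore results from nodes farther than the hop limit.
        if hops_away <= self.max_hops:
            self.entries[query.lower()] = (time.time(), results)

    def get(self, query):
        hit = self.entries.get(query.lower())
        if hit is None:
            return None
        stamp, results = hit
        if time.time() - stamp > self.max_age:
            # Stale entry: evict and report a miss.
            del self.entries[query.lower()]
            return None
        return results
```

A client sitting on a big pipe would consult the cache before forwarding each incoming search, which is the performance win the comment describes.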
  • The granddaddy of peer to peer file sharing is never mentioned on /. Why is that?

    http://www.bigredh.com

    http://www.thestar.com/thestar/back_issues/ED20000812/money/20000812BUS01b_FI-HOTLINE.html

    On behalf of /. I want to thank our friends from the Great White North, eh.

  • rofl you actually got 11 funny mods ;P jeremy
  • I agree... this death of gnutella is silly considering it's beta software and hasn't even reached version 1.0 yet for crying out loud.

    I've had problems using gnutella, so I don't. I find it to be a waste of my time, and I suspect that most people trying it for the first time will be inclined to do the same, until Napster and its alternatives (scour, mx, etc.) go away... IF they go away.

    Eventually Gnutella or its clone apps will improve in quality to the point where the program becomes useful for newbies, and presumably this broken code will be fixed by then.

  • Why is it that people who have never even looked at a gnutella packet stream or contributed a line of code to a client are so damn willing to offer their worthless, uninformed opinions (I'm reacting more to the lame-ass article on ZDNet but the article I'm replying to qualifies also)? Anyone who has taken a close look at the traffic is aware that gnutella has been subject to a denial of service attack ever since the first injunction against Napster was almost enforced. The goal of this attack is to convince the deep thinkers on the net that gnutella "will not scale" and hence to give up on true peer to peer networking in favor of whatever the RIAA does with Napster once they manage to steal the technology with the aid of their judicial accomplices.

    For those who have not bothered to read, every gnutella packet has a TTL (time to live, which really translates to hops to live) so that it travels at most 'TTL' hops, which is supposed to be 7 by default. For those keeping track, that means there is really nothing that needs to be 'scaled' regardless of how many total nodes there are. Packets on a "well behaved" network will die a natural death before causing a meltdown. We were handling large networks (thousands of distinct nodes) just fine before the attacks began. What we have to do now is release clients that defend against the DOS attack and get enough such nodes out there to restore the previous performance.

    Again for those who have not looked at the issue, the way you can mount an attack on gnutella is to set up as many clients as you can that don't route back any responses and at the same time spew out an unending stream of pings and possibly nonsense search queries (meant to load other clients but not produce query response packets). One way to fight back is to drop all pings on connections that have an inordinate proportion of pings, and drop connections that appear to be generating too many queries that don't result in any responses. The first strategy is easy to implement and is in the current version of Mactella. The second is harder and is under development now.

    Those who are loudly proclaiming that gnutella is failing because it does not 'scale' are little more than unwitting dupes of the sinister forces that are trying to generate precisely that impression. It may be true that it can't adapt to overcome malicious attacks but that is far from proven at this point.
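The first defense described above (drop pings on connections where pings dominate the traffic mix) can be sketched in a few lines. This is an illustration of the idea, not Mactella's actual code; the 80% threshold and 20-packet warm-up are invented for the example.

```python
class ConnectionStats:
    """Per-connection packet accounting: once a connection's stream is
    overwhelmingly pings, start dropping its pings on the floor."""

    PING_RATIO_LIMIT = 0.8   # drop pings once >80% of traffic is pings
    MIN_SAMPLE = 20          # don't judge a connection too early

    def __init__(self):
        self.pings = 0
        self.total = 0

    def should_drop_ping(self):
        # Called for each incoming ping; returns True if it should be dropped.
        self.pings += 1
        self.total += 1
        if self.total < self.MIN_SAMPLE:
            return False
        return self.pings / self.total > self.PING_RATIO_LIMIT

    def saw_other_packet(self):
        # Called for queries, pongs, query hits, etc.
        self.total += 1
```

A healthy connection mixing pings with real traffic never trips the limit; a pure ping flood gets silenced after the warm-up window.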
  • Probably because a very similar article [slashdot.org] was posted just 10 days ago.
  • Jeez, have you even bothered to look at the packet stream before making your pronouncement? It is being crushed under a stream of meaningless pings, not search queries. And the ominous thing is how few pongs those pings are generating, and how few responses the queries are producing. I think the reason is that there are currently so many malicious clients that are not routing any responses and are generating a constant babble of meaningless pings. What gnutella needs are clients that are designed to be resistant to denial of service attacks, not napsterization (centralizing).
  • You can always make up for inexperience by being more thoughtful when making design decisions and doing proper research. There's tons of literature on how to do distributed searches; if they'd bothered to look it up rather than 're-invent' (or degenerate) the wheel, gnutella wouldn't be facing this now. It's not enough to just hack the code. Not investing time in the analysis phase of the software development cycle is what evokes negative reviews in the end, from people like ZDNet or whoever analyzes this stuff. If they had paid more attention then, and come up with a good system design, none of this would have crept up. End of story.


  • With its de-centralised architecture and its ability to share all files, not just mp3's, gnutella gets around all the copyright issues. And this is where napster is going to die... not in speed or anything like that.
  • Gnutella clients could have a list of user-supplied keywords, and try to connect to other clients with similar keywords. As another poster pointed out, the keywords could include words you've searched for.

    Connecting: Connect to any old node, like usual, then do a search to find better neighbors.

    Refining: Clients could have a speed setting, for moseying around their neighborhood, replacing direct connections with higher-rated 2nd or 3rd order neighbors. Perhaps some degenerate simulated annealing, with a randomness setting that decreases gradually, so at first you're just wandering, then later you settle in near like-minded people.

    Hopefully this would limit the distance most broadcast packets have to travel. It could also make it useful to have file listings and chat within your immediate neighborhood (clients at most k hops away).

    These ideas came out of a conversation I had with someone who wanted to make a "spiritual internet", connecting people with people. Too bad I've been too busy/lazy to implement them yet. If you want to use them, please make it available under the GPL, and let me know. I'll be glad to help once I finish moving in to a new apartment.

    Chris

  • Of course, to do this would require a rewrite of the Gnutella protocols. With the existing protocol, all it would take to destroy the network segregation would be one wayward client connecting to clients on both networks, bridging them, and sending 'polluted' IPs out.

  • > Note that Napster also implements kind of clustering: you see the files of people in your "cluster", not of all Napster users on Earth.

    Really? How disillusioning. Can you explain this in a bit more detail?

    Thanx

  • Is the *real* solution to this. But when? This year? This decade...
    Lord Pixel - The cat who walks through walls
  • by jovlinger ( 55075 ) on Friday September 22, 2000 @07:15AM (#760941) Homepage
    erm. this seems like a problem that is solvable in any number of ways. Replication seems to be easiest. Cache popular content onto fast pipes (provisions for bandwidth limiting are assumed). Encode a forwarding requirement into the protocol: every file you download, you have to allow someone to grab that file from you. Use multicast and PPV-style scheduling (requesters register for a file, letting the server determine when, within a short time period, to multicast it).

    I'm surprised by this being an issue at all. I haven't looked at the gnutella infrastructure, but these are issues that I would have thought were tackled during the initial design.
  • by Sanity ( 1431 ) on Friday September 22, 2000 @07:16AM (#760942) Homepage Journal
    Actually with Freenet it seems to be a log(N) problem. Much better.


  • I don't know if anybody else has noticed this, but all of the Unix clients for Gnutella suck. They seem to range from the unbelievably confusing and undocumented console-mode applications (gnut) to crash-prone X applications (gnubile -- never runs more than once between reinstalls!) to leech-only clients (gtk-gnutella).

    I'm not going to connect just to leech files, which basically leaves me with 0 options for the client. Plus it seems like the majority of the users are leeches on there already, and those that aren't are on modem connections (and always disconnect the instant someone starts to snag a file from them).
    That's my $0.02
  • by Limecron ( 206141 ) on Friday September 22, 2000 @07:18AM (#760944)
    The problem is quite obvious and has been around as long as peer-to-peer and server-based networks have both existed. Peer-to-peer networks work wonderfully when they're small. Server-based networks are much more efficient and are therefore nearly always used for large networks. Can Gnutella still work? Yes, but it will have to be divided into smaller networks. For example, you could have separate networks for pop MP3s, rock MP3s, country MP3s, rap MP3s, jazz MP3s, movies, and warez... err... shareware. Of course, each network should reach a critical mass and then divide in half at that point. Wow, maybe I should get programming...
  • There is an interesting tendency:
    Napster has a central server and keeps information on the client side, and it works.
    Gnutella does not have a central server but still keeps information on the client side, and it barely works.
    Freenet does not have a central server and does not keep information on the client side, and it is not practically usable (yeah, I know it's the most promising technology, but still...).

    Also, all of them call themselves peer-to-peer, but Napster actually has a central server, and Freenet does not have a second peer to connect to at all. Can somebody tell me what it is - P2P [denissov.com]?


  • May I have your attention please,
    may I have your attention please,
    will the real bruce perens please stand up,
    I repeat will the real bruce perens please stand up
    .....we're gonna have a problem here.........

    Ya'll act like you never seen a slash poster before
    mouse all on the floor
    like mom and daddy just burst in the door
    and started whoopin yer ass worse than before
    they first had endorsed
    buyin' ya a crappy computer (aaaaaah)
    It's the return of the...
    "awww..wait, no wait, you're kidding,
    he didn't just say what I think he did,
    did he?"
    and Mr. Cray said...
    nothing you idiots, Mr Cray's dead
    he's locked in my bassment
    microsoft women love Sig '11
    chicka chicka chicka bruce perens,
    "I'm sick of him, lookit him
    walkin around, grabbin his GNU know what
    flippin' to GNU know who"
    "yeah, but he's so smart though"
    yeah, I probably got a couple of screws up in my head loose
    but no worse than what's goin on in your sister's webcam (eheheheh)
    sometimes, I wanna get on ZD and just let loose
    but cant, but it's cool for RMS to hump a dead GNU
    My mouse is on your link, My mouse is on your link
    and if you're lucky, I might just give it a little click
    and that's the message that we deliver to little kids
    and expect them not to know what a free software is
    of course they're gonna know what Microsoft is
    by the time they hit 4th grade
    they got MS-NBC, dont they?
    we ain't nothing but omnivores
    well, some of us carnivores
    who read other people's mail like crackwhores
    but if we can read your e-mail like it's available
    then there's no reason that a man can't forge spam from your account
    but if you feel like I feel, I got the antedote
    trolls wave your penis birds, sing the chorus and it goes........

    I'm Bruce Perens, yes, I'm the real Perens
    all you other Bruce Perens' are just imitating
    so won't the real Bruce Perens please stand up,
    please stand up, please stand up
    cause I'm Bruce Perens, yes, I'm the real Perens
    all you other Bruce Perens' are just imitating
    so wont the real Bruce Perens please stand up,
    please stand up, please stand up

    Sig 11 don't got to cuss in his posts to get Karma
    well I do, so fuck him and fuck you too
    you think I give a damn about my Karma
    half of you trolls can't even stomach me, let alone stand me
    "but bruce, what if you win, wouldn't it be weird"
    why? so you guys can just lie to get me here
    so you can sit me here next to Natalie here
    shit,Enoch Root's momma better switch me chairs
    so I can sit next to trollmastah and Post First
    and hear em argue over who modded it down first
    little troll, flamed me back on IRC
    "yeah, he's fast, but I think he types one-handed, hee hee"
    I should download some audio on MP3
    and show the world how you released it BSD (aaaaaah)
    I'm sick of you little troll and l33t groups
    all you do is annoy me
    so I have been sent here to destroy you
    and there's a million of us just like me
    who post like me, who just don't give a fuck like me
    who code like me, walk, talk and act like me
    and just might be the next best thing, but not quite me......

    I'm Bruce Perens, yes, I'm the real Perens
    all you other Bruce Perens' are just imitating
    so won't the real Bruce Perens please stand up,
    please stand up, please stand up
    cause I'm Bruce Perens, yes, I'm the real Perens
    all you other Bruce Perens' are just imitating
    so wont the real Bruce Perens please stand up,
    please stand up, please stand up

    I'm like a head trip to listen to
    cause I'm only givin you things
    you troll about with your friends inside you rabbit hole
    the only difference is I got the balls to say it
    in front of ya'll and I aint gotta be false or sugar coated at all
    I just get on the web and spit it
    and whether you like to admit it (riiip)
    I just shit it better than 90% you trollers out can
    then you wonder how can
    kids eat up these posts like gospel verse
    it's funny,cause at the rate I'm going when I'm thirty
    I'll be the only person in the chat rooms flirting
    cyberin with nurses when I'm jackin off to porno's
    and I'm jerkin' but this whole bag of viagra isn't working
    in every single person there's a bruce perens lurkin
    he could be workin at Micron Inc., spittin on your SDRAM
    or in the printer queue, flooding, writin I dont give a fuck
    with his windows down and his system up
    so will the real perens please stand up
    and click 1 of those fingers till you drag up
    and be proud to be outta your mind and outta control
    and 1 more time, loud as you can, how does it go? ...........

    I'm Bruce Perens, yes, I'm the real Perens
    all you other Bruce Perens' are just imitating
    so wont the real Bruce Perens please stand up,
    please stand up, please stand up
    cause I'm Bruce Perens, yes, I'm the real Perens
    all you other Bruce Perens' are just imitating
    so wont the real Bruce Perens please stand up,
    please stand up, please stand up

    I'm Bruce Perens, yes, I'm the real Perens
    all you other Bruce Perens' are just imitating
    so wont the real Bruce Perens please stand up,
    please stand up, please stand up
    cause I'm Bruce Perens, yes, I'm the real Perens
    all you other Bruce Perens' are just imitating
    so wont the real Bruce Perens please stand up,
    please stand up, please stand up

    haha guess it's a bruce perens in all of us........
    fuck it let's all stand up
  • Because if someone in your node has what you need, there's no need to go outside for it. And you've saved quite a lot of bandwidth and effort (for your computer).

    Remember, the most popular files are also the ones that are most available.

    Coupled with a spell-checker that would slap people in the face before they make useless searches, average users would rarely need to communicate outside their own node.

    Look how well Napster works, and it doesn't have any communication between nodes. (I know it's not great for obscure stuff. But your average users don't want obscure stuff. That's obviously why they're still obscure.)
  • Wouldn't it be easier just to implement bandwidth-based routing, instead of specifying minimum bandwidth in the search? Right now, I think the search goes to everyone and the low-bandwidth answers are filtered out. If a node were not to forward searches for which nothing can be returned anyway, you could avoid wasting time and bandwidth. All it requires is that the high-bandwidth servers be well-connected, which could be induced by allowing preferences for higher-bandwidth links...

    -_Quinn
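The pruning suggested above fits in one function. This is a sketch under stated assumptions: that a node knows (or estimates) the advertised speeds of hosts reachable through it, and that queries carry a minimum-speed field; the parameter names are invented for illustration.

```python
def should_forward(query_min_speed_kbps, downstream_speeds_kbps):
    """Forward a search only if at least one host reachable through
    this node could satisfy the query's minimum-speed requirement;
    otherwise let the query die here instead of flooding onward and
    being filtered out at the far end."""
    return any(speed >= query_min_speed_kbps for speed in downstream_speeds_kbps)
```

A node whose whole neighborhood is on modems would simply stop relaying broadband-only searches, which is exactly the bandwidth saving the comment is after.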
  • Hey,

    Perhaps the geeks of the world need to create some more news

    Nah, we don't want people murdering another member of the royal family just to generate news people care about. That's what normally happens when news gets slow...

    (That was a joke)

    Michael

    ...another comment from Michael Tandy.

  • If this is informative and not Offtopic, then I have a VERY informative link about P2P pages:
    Wired's Guide to Global File-Sharing [wired.com]

    This list of 240-plus downloads, services, and information resources - most of them free - is designed for experienced P2Pers and novices alike.
  • I don't think Gnutella is dead quite yet, but you make a good point.

    The idea of pre-categorizing the Gnutella network by file type makes good sense. Split the system into Gnutella for mp3s, Gnutella for software, Gnutella for trolls, Gnutella for pictures, etc...

    This would drastically reduce the size of each network subsection, and would help keep things to a reasonable size for searches. Plus, your results would be more likely to be relevant, due to the fact that everything on that particular network section is at least of the type you are looking for.
  • As I've said before (in this [slashdot.org] comment comparing freenet and gnutella), gnutella's protocol sucks. Senseless flooding across nodes, etc.

  • This is the problem with ALL distributed architectures. It's an N^2 problem.
  • The number of connections that you are describing assumes a fully-connected network. Gnutella is not.

    Think: if there are 3000 nodes connected to GnutellaNet, does your machine have 3000 socket connections open simultaneously? No.
  • That's the problem with a decentralized service. It's basically held together by a large number of lines that can't be assumed to be much better than 28.8 kbps modems. Segmentation is inevitable and it's just plain slow as a result. That's what you get for not being able to be shut down, so don't complain.
  • Fundamental difference in what you're trying to accomplish. With IP, you don't have to send a request to every node of a network. Routers forward your packets directly along a path to their destination.

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • by Chalst ( 57653 ) on Friday September 22, 2000 @07:04AM (#760957) Homepage Journal
    Freenet [sourceforge.net] is of course an approach to peer to peer file sharing that tries to address these scalability issues. Shame the article doesn't mention it.
  • There is no website for it, why I don't know...

    There is no Linux version available, but there is a Windows version of it. I think they modified some of the code, because it won't connect with any gnutella program except the one it is made for.

    If you want to get the program, you can look in various newsgroups such as alt.binaries.multimedia or my favorite alt.binaries.drwho. If you can't find it still, you can email me.
  • Eventually it will find its own level anyway...

    Impatient larval warez doods will move on to something that better suits them.

    People who use it and like using it will continue to use it.
  • by Animats ( 122034 ) on Friday September 22, 2000 @07:19AM (#760960) Homepage
    The Freenet people haven't figured out how to do distributed searches efficiently yet, although they realize that's a problem. They may well crack that problem, but probably not quickly.
  • by biftek ( 145375 ) on Friday September 22, 2000 @07:20AM (#760961)
    Has everyone else noticed that they get a strange sensation of deja vu whenever reading Slashdot? It is rare to find something which is actually news (and new). Perhaps the geeks of the world need to create some more news, to keep Slashdot fed and healthy......
  • by Kierthos ( 225954 ) on Friday September 22, 2000 @07:21AM (#760962) Homepage
    *nod* And the search capabilities seem to be remarkably moronic. On a friend's computer, I watched him wade through all sorts of files that weren't even germane to the parameters he'd searched for. In the end, it all comes down to how people describe the files they are sharing over Gnutella.

    On the plus side, he eventually did manage to find every single .mp3 he was looking for... it took him a while, but the thing of it is, some of these files he couldn't find at all on Napster.

    Is there any reasonable way to determine usage stats for Gnutella?

    Kierthos
  • http://slashdot.org/article.pl?sid=00/09/12/1217200&mode=thread

    Sometime last week I think. ai yai yai
    --
    Peace,
    Lord Omlette
    ICQ# 77863057
  • by AustenDH ( 157687 ) on Friday September 22, 2000 @07:22AM (#760964)
    Therefore it is as scalable as you want it to be. It is stuff like this that reminds me of the good-ol-days when one had to bitch and whine about missing features, and wait around until the people developing said features would come out of the woodwork.

    There are still people like that in the world today. What a shame! It seems that ZDNet likes to cater to this crowd. So now they are bitching to an entire community in which they were - by default - invited to participate.
  • When Gnutella first came out I used it quite a lot. Then I stopped downloading for a while because of connectivity issues, and a few months later, with new versions, the service just seemed slower. I think it may have had a lot to do with how spread out the network is. As they said, unlike Napster there are no centralized servers, so it is easy to lose contact.
  • Hi, I'm a hacker for Mojo Nation and I don't think we ever ask you for any personal details.

    Just go to our SourceForge page [sourceforge.net] and either grab the .tgz or CVS up.

    Probably in the future we will ask for some demographic info like age, country, operating system, timezone or whatever in order to get an idea what sort of features we should add to benefit the most users, but at that time, we'll certainly have a good privacy policy.

    Mojo Nation was formed by a bunch of cypherpunks so be assured that we will take privacy issues very seriously. (In fact, the architecture that we've already designed and deployed has full strength crypto and privacy-friendly features integrated throughout.)

    Regards,

    Zooko

    Hacker,

    Evil Geniuses For A Better Tomorrow

  • Moderation Totals:Offtopic=10, Troll=5, Redundant=1, Funny=24, Overrated=5, Total=45.

    Howza! Like it's been said before, Metamoderating's going to get some abuse later on today...

  • Moderation Totals:Offtopic=10, Troll=5, Redundant=1, Funny=25, Overrated=11, Total=52.
    End result: -1 funny (wow!)

    It's a week and a half later, so I doubt that it's gonna get any more moderation. 52 points is pretty massive. I'm impressed.

  • Yeah, even the latest dead-tree edition of Wired included this profound (yeah, right) idea of Gnutella not being scalable as a small column in their peer-to-peer bonanza this month. Of course they labeled the column "Gnutella: Unstoppable by Design", but I think they were thinking of unstoppable in terms of copyright enforcement and not utility to the user.
  • It sounds as if searching is the only thing to find information in a gnutella/freenet network.

    Is anybody considering that we might want to use the protocol for hypertext? What I mean is such that we can type the unique document identifier in the browser location box, instead of http://www.blablablabla.com

    gnutella://lightbulbscewring.for.dummies.xml
    freenet://why.not.eat.that.yellow.snow.FAQ.html

    When the URL is always a 'search request', then we can't be sure that we get a document each time, let alone the same document.

    How about incorporating some sort of directory system and resource location and/or identification method?

    That could well result in making the web obsolete, old technology. Yes we've seen the Web, it was nice: Time to move on?

  • by crovira ( 10242 ) on Friday September 22, 2000 @10:06AM (#760971) Homepage
    This is an endemic situation with ALL friggin web content.

    If you use search engines which don't check the accuracy of the data they scrounge, or run your own with Archie/Veronica types of searches, or worse, become your own search engine snooping on everybody's hard drives, you're going to take longer and longer to retrieve indexes to content of more and more dubious quality.

    The world NEEDS MP3.com types of businesses that rate & index as well as store content.
    The world NEEDS engines that can demand micro-payment from the recipient before sending a file.
    The world NEEDS micro payment services like X3.com to catch the pennies and send the content producers their due.

    And SCREW the RIAA, MPAA and other Luddites, and SCREW the culture vultures who rip off the content creators (artists and writers etc.) and rip off the consumers by overcharging simply because they put themselves in everybody's faces.
  • I agree with you. However, my point still remains. Gnutella is open source. Therefore, the architecture is also open and available for comment and modification. There is nothing stopping Gnutella from looking for a definable starting point, or home base, if you will.

    I think that anything being open automatically assumes more than just the code which makes up the software, but also the goals of the project. Obviously if, "a true peer to peer network is inherently limited," then an alternate method must be explored.

    Perhaps, instead of a dedicated central server, or a true peer to peer network, maybe the peer to peer part should only contain a list of servers who have currently volunteered.

    At any rate, I don't use Gnutella, or Napster, or whatever, so my input on its architecture may be missing the point. My input on the architecture not being immutable, I think, is far from missing the point.

    Cheers
  • by PureFiction ( 10256 ) on Friday September 22, 2000 @07:35AM (#760973)
    There is a way to start resolving this problem, and it is currently in development.

    The gPulp project is currently working on all of these issues. Check proposals and ideas at: http://gnutellang.wego.com/go/wego.pages.page?groupId=133015&view=page&folderId=136401&pageId=177268&JServSessionId=3fe61b505308701b.415222.969643886549 [wego.com]

    There is also a server oriented gnutella application which aims to start resolving some of these issues in the near term. Features such as:

    1) Provide a server for broadband / dedicated network users to provide content with a true server oriented gnutella node. This will be similar to a modified apache for singular installations, or a federated distributed server architecture for routing and caching fun.

    2) Remove broadcast push requests (in all future clients)

    3) Proxy and cache support for slow users. This will allow beefy servers to take over some of the load which dialup / slower clients experience. This will be somewhat à la Freenet, as popular data will propagate through caches in various nodes. Also, this can provide a level of anonymity which is not present.

    4) Adaptive servers which configure their network connections for optimal efficiency. Not too busy, not too slow, and with the widest topological distance from their peers (if linked), plus fuzzy / reactive propagation algorithms so that TTLs and routes can be dynamically modified as load increases or other factors require.

    There is nothing fundamentally flawed with the gnutella architecture, and it is far from a 'dead horse'. However, there are significant inefficiencies and complications which are causing problems right now. Rest assured these will be fixed.
  • by Animats ( 122034 ) on Friday September 22, 2000 @07:37AM (#760974) Homepage
    I've pointed out several times that the Gnutella protocol doesn't scale well. It's not impossible to fix this, but it needs a major rethink.

    The basic problem is that small sites either take a lot of search hits to which they will answer "no find", or their index has to be mirrored elsewhere, which introduces centralization. There's an economy of scale to searching.

    So automatic, distributed, redundant, partial centralization is necessary. This is hard. It also has to be reasonably secure against hacking; look at the problems IRC has. It probably needs a reputation service, so people who spam the indexing system lose.

    On the other hand, music interest, being a popularity thing, follows a power law; the music most likely to be searched for will be found easily. A simple hack on Gnutella so that it queries servers slowly, in order, starting at the one with the best response time, stopping with the first find, will keep the thing from collapsing until somebody cracks the hard problems. It's not necessary to crack the general distributed search-engine problem to fix this.
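The "simple hack" described above (query hosts one at a time, best response time first, stop at the first find) can be sketched directly. This is an illustrative implementation, not code from any Gnutella client; `probe(host, query)` is an assumed callable that asks one host and returns a result or `None`.

```python
def staged_search(query, hosts, probe):
    """Instead of broadcasting, try known hosts sequentially in order
    of measured response time, stopping as soon as one has the file.
    For popular (power-law) content this usually touches only the
    first host or two, avoiding network-wide floods."""
    for host in sorted(hosts, key=lambda h: h["rtt_ms"]):
        result = probe(host, query)
        if result is not None:
            return result
    return None   # nobody had it
```

Since the most-searched-for music is also the most widely held, the loop terminates early for the common case, which is the whole point of the hack.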

  • by vla1den ( 233261 ) on Friday September 22, 2000 @07:38AM (#760975) Homepage
    Well, actually it's the problem with all server-less architectures. If you have to have searches, you've got to have a server. If you want to make it classic P2P -- make the server invisible. One way is to create a distributed server. More on this here [denissov.com].
  • by Hobbex ( 41473 ) on Friday September 22, 2000 @07:38AM (#760976)
    I can't understand why this is news to anyone. Those of us who spend time thinking about these things said it right away when Gnutella was released, and we had discussed and rejected the broadcast model for routing several times before that (see the Freenet development list archives if you don't believe me).

    The Math behind it is simple:

    - Every user adds Cu amount of capacity to the network (on average).
    - Every user also adds Tu amount of traffic (also on average). However, because of the broadcast nature that traffic is sent to all users, so with N users, each user generates Tu*N amount of traffic.

    This means that the total capacity of the network is:
    C = Cu*N
    (Capacity per user times the number of users). The total traffic on the other hand is:
    T = Tu * N * N = Tu * N^2.

    For the network to work, C needs to be greater than T, but since T grows quadratically while C only grows linearly, there is always an N beyond which T exceeds C. You simply cannot win using a broadcast model.

    On the Freenet-dev list we have a standing rule that two words are indecent and offensive: "centralize" and "broadcast". We think we can pull it off without them, but it makes everything 1000% more difficult, which is the simple answer to why Freenet is developing more slowly than the one hundred million Napster and Gnutella variants out there. That, and the fact that you are not helping us...
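    The crossover that argument predicts is easy to see with a few lines of arithmetic. The Cu and Tu values below are arbitrary, chosen purely for illustration:

```python
# Illustrative numbers for the argument above: capacity grows linearly,
# broadcast traffic quadratically, so traffic always overtakes capacity.
Cu = 100.0  # capacity each user contributes (arbitrary units)
Tu = 0.5    # traffic each user originates, broadcast to everyone

def capacity(n):
    return Cu * n        # C = Cu * N

def traffic(n):
    return Tu * n * n    # T = Tu * N^2

# First network size where traffic exceeds capacity:
n = 1
while traffic(n) <= capacity(n):
    n += 1
print(n)  # 201, i.e. just past N = Cu/Tu
```

    Whatever values you pick for Cu and Tu, the loop terminates: the quadratic term always wins eventually.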
  • ...with its distributed queries, limited horizon, and lots of non-working hosts... Anyway, a short-term remedy is to index/filter the network like sharetraxx (http://sharetraxx.com) does.
  • I have a 21-inch flat viewing area sony CRT at home. Handles 110mhz refresh at 1024x768. I am not, however, reading from home.

    Maybe if the public library system had a little more cash...
  • What ever happened to that flesh eating virus that was around before O.J. I've been wondering about that for a while now...especially with this growth I have on my foot.
  • Sure, it might not scale as well, but I'd like to see napster survive RIAA's denial of service attack. Gnapster can survive the injunction attack.
  • Mojo Nation *does* look cool, but since they don't have a privacy policy, I'm loathe to provide my personal details. Let me know when the privacy policy is implemented, and I'll give Mojo Nation some consideration.
  • Even if the protocol is fixed, for it to be successful people need more incentive to put their stuff up for sharing. Gnutella has a major problem with more downloaders than file sharers right now.
  • The problem with centralization is that it creates a target (legal, technical). This is one of the core bulletpoints Gnutella has --- its "distributed" nature makes it harder to regulate.

    The trouble Gnutella is going to have as it attempts to scale beyond community applications to infrastructure applications (like using the Gnutella community as a search engine for the web) is that large-scale apps won't work across the Gnutella peer-to-peer fabric. They will need to be "directed" by a "persistent" server network of some sort.

    Building a server-based distributed database isn't trivial, but it is (IMO) a "solved problem". The hard, unsolved problem Gnutella has is trying to self-organize into something approximating an efficient server-based network.

    I think this is somewhat silly. I think that if Gnutella can solve that problem, there are more interesting applications for it than Search.

    I think a more realistic angle for the Gnutella supporters to take is to bifurcate: the distributed, amorphous network is valuable when there are lots of them --- it's very hard to censor. Gnutella needs a directory system (à la Shoutcast's system) to locate these groups. The distributed search system is also valuable, but has different requirements.

    Either problem is realistically solvable in the near term. Solving both of them simultaneously is much harder.

  • Total nitpick, but the function described in this example is not exponential; it is a basic parabolic polynomial: y = .5x^2 - .5x. In fact, each unit change in the x variable produces an increasingly smaller incremental change (percent-wise) in the y variable. For the function to be exponential it would have to look something like y = 2^x.

    In any case, unless I misunderstand Gnutella, the problem is not that each peer is connected directly to each other peer, but that the network is made up of subnetworks which get choked off when one subnetwork is connected to another via a link that is too small (i.e. a 56K modem). This is the same kind of problem one has with a typical star schema when one attempts too many joins (i.e. the data is a little too normalized). In order to have only one connection, you would have to arrange this in a client-server fashion, which would defeat the whole purpose of decentralizing the search function. The Internet itself is a star schema which works fairly well, but only because nobody puts a 56K modem as a router in between two T3s -- which, according to the article, is the problem with Gnutella right now.
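    The quadratic (not exponential) growth of that pair count is easy to confirm by brute force: x fully connected peers have x*(x-1)/2 links, matching y = .5x^2 - .5x.

```python
# Brute-force check of the pair-count formula discussed above.
from itertools import combinations

def link_count(x):
    # Count every unordered pair of peers, i.e. every direct link
    # in a fully connected network of x peers.
    return len(list(combinations(range(x), 2)))

print(link_count(10))  # 45 links among 10 fully connected peers
```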
  • My apologies if this is a shallow question that has been thought through, but I've just recently heard about Mojo Nation.

    What happens when the US decides to start taxing purchases made on the internet? Mojo is useless to the government; they'll want dollars. But will there even be a dollar/Mojo exchange?

    --
  • by Anonymous Coward

    Before dismissing it.

    The code, protocol, etc. are all from the 0.6x that Nullsoft released. Besides, it's not like anyone controls Gnutella. Anyone can feel free to come up with a better protocol - all you've got to do is get people to use it.

  • I was just complaining cause it took up most of my page.
    And those other posts I don't have to see cause they get modded down into oblivion.

  • by Sanity ( 1431 ) on Friday September 22, 2000 @07:41AM (#760988) Homepage Journal
    It is true that fuzzy searching has not yet been implemented - although searching by song title, artist, and album are possible using "subspaces", a mechanism present in our recent 0.3 release. I recently posted a proposal for this to the Freenet mailing-list and I think some guys are working on it.

    The underlying Freenet architecture should actually be quite a good fuzzy-searching system, it is just that we have not got around to enabling that functionality yet as we have been concentrating on getting the underlying architecture right.

    --

  • Napster isn't scalable. Why do you think they have separate, non-connected servers?
    ---
  • I noticed Glandritek looks an awful lot like
    Woodlock. Has anybody told Cutter about this?
    Ah well, gotta do something to put wolf chow
    on the table, I guess.

    Chris Mattern
  • There is an up and coming alternative; it's free, it's just not Free: Filerogue [filerogue.com]
  • Perhaps the geeks of the world need to create some more news, to keep slashdot fed and healthy......

    Quick someone rob a bank or something


    My Home: Apartment6 [apartment6.org]
  • It's just a matter of performance of Napster's server(s), not all the clients' connections. They can upgrade their hardware and it'll solve the problem. Gnutella can do nothing. Gnutella is not scalable. Napster is.
  • When you're using peer to peer, not only are you as fast as the slowest link, but you must go through MANY more hops to do searches. Obviously a centralized service is going to scale better. How many times have you seen Yahoo bogged down (not counting the DOS attacks heh)? Anyone remember the worm program written by that Cornell student in which he unintentionally brought the Internet to its knees? That was a peer to peer system as well.
  • Why doesn't it show up on SourceForgery now? You don't have to let anyone else work on it until you are finished, and the CVS repository is a great way to make backups. (-as well as perform version control, duh).
    ---
  • There are so many products out there that will do the same thing, with a much more user-friendly interface. In fact, Intel is making a Gnutella-like clone for corporate Intranets. Besides that, I'd rather have a centralized database and an application server that I could do searches with (if I were a system admin, that is). Peer to peer applications have a greater tendency to pose security risks.
  • I agree. I think that would be BSD. Because I am closely studying Furi's Java code, and I can say that this guy who wrote it, really really knows what he's doing. Great stuff.
  • Yeah, you're right it is Woodlock. Fortunately Richard Pini contacted me and said that he loved it and it was OK to use Woodlock. He was surprised his elves had advanced so far as to distribute all the tech news.

    Brian
    BBspot [bbspot.com]
  • There will have to be an exchange rate, because the taxman will have his due. Some exchange rate will have to be declared, and it will be your responsibility to submit taxes to the relevant state and federal authorities.

    No different from the fact that you are supposed to submit tax from out-of-state mail order purchases directly to your state, and no different from the taxes levied on barter economies like the many community-issued currencies like IthacaHours: http://www.ithacahours.org

    Note if you barter goods you are also responsible for the relevant taxes, for instance if you "swap" cars.
  • What happens when the US decides to start taxing purchases made on the internet?

    Well, if mojo isn't cash, why does it need to be taxed? It's more of a barter system; I don't think there's tax on that.

  • I would have to agree with this. Gnutella is more the protocol than anything else. It is a way of sharing files. There are many modifications to Gnutella out there that use a different set of servers. ABMNet is an example of one. It's popular amongst newsgroupies.

    ABMnet is a modification for mostly videos/multimedia. (I use it to snag Dr Who videos). It's much better than just using gnutella for a few reasons. There are fewer users and they all share similar interests in what files are shared. Fewer users also mean fewer queries, and so the servers don't get bogged down.

    By using gnutella in such a way to tailor to individual market needs, it can be much better than napster could ever be. If there were a set of gnutella servers just for a particular style of music, the searches would be faster and performance much better for the users and the servers.

    If you look at it from this point of view, you can't really extend napster too easily (although it has been done with things like napigator).
  • Remember my message of a few days back about how I wasn't helping because it's written in Java? Well, I got spanked for that (and I've gotten spanked re: other anti-Java postings) so I downloaded Kaffe last night. I'll be doing some ramp-up on Java for a while and if my negative view of Java melts away I'll pitch in with FreeNet.
    --
    Linux MAPI Server!
    http://www.openone.com/software/MailOne/
  • by (void*) ( 113680 ) on Friday September 22, 2000 @07:50AM (#761011)
    Yes, it does not scale. Anyone who has done basic CS101 will tell you that. But this does not mean it is not useful. It just means that it was not cut out to span the entire net. I can see Gnutella working within a college's residential dormitory, for example. Or within an office building. Maybe not to the entire internet, but certainly for small networks, this might still be useful.

    So I don't think Gnutella is going down in flames. Since it is open source, we may take that as a lesson learnt and perhaps rip out the offending non-scalable part and build a better file sharing device that actually works this time.

  • by PureFiction ( 10256 ) on Friday September 22, 2000 @07:50AM (#761012)
    You forget a few vital points.

    1) Every bit of information is NOT sent to every other client. Many requests are dropped, ignored, or simply do not reach their destination when the TTL expires.

    2) The nature of the clients ensures that slow connections have fewer peers, propagate fewer requests, and receive fewer requests than faster ones.

    These two attributes greatly reduce the theoretical maximums encountered when doing math.

    The real world implementation does not even remotely follow the absolute mathematical predictions.
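    Point 1 can be illustrated with a toy breadth-first flood: a TTL bounds how far a query travels, so in a large network most hosts never see it. (This is a deliberate simplification of the real protocol, for illustration only.)

```python
# Toy TTL-limited flood over an adjacency-list graph.
def flood(graph, start, ttl):
    reached = {start}
    frontier = [start]
    for _ in range(ttl):
        next_frontier = []
        for node in frontier:
            for peer in graph[node]:
                if peer not in reached:
                    reached.add(peer)
                    next_frontier.append(peer)
        frontier = next_frontier
    return reached

# A chain of 6 hosts: with TTL 2 the query dies two hops out.
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
print(sorted(flood(chain, 0, 2)))  # [0, 1, 2]
```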

  • Yes, I simplified it, and things are never the same in reality as they are on paper. But the basic observation that traffic grows quadratically (or close to it) while capacity only grows linearly does hold. It is a little like big-O calculations for algorithms - running Quicksort will not take exactly n*log(n) operations and a program running it will never be ideal, but that does not change the fact that there is always an n for which it is faster than any implementation of an O(n^2) sort.

  • ZDNet has reported that Microsoft's peer-to-peer NetBEUI network does not scale as well as a Microsoft NetBIOS over TCP/IP network.

    ZDNET has also reported that when you are setting up a network of more than 10 computers using Microsoft NetBIOS networking, you may wish to consider a client-server network as opposed to a peer-to-peer network.

    And after this commercial break, we report on the latest findings about using coaxial cable for network wiring.

    Like, duh!
  • by AFCArchvile ( 221494 ) on Friday September 22, 2000 @07:06AM (#761019)
    Ever seen a list of what the Gnutella client is wading through when it performs a search? I have, and it's not pretty: Metallica, Eminem, Photoshop, Win2k, DivX, Natalie Portman, Britney Spears, 3DSMAX, and a whole slew of smut which I'll omit for the sake of decorum.

    Gnutella was a good idea; it was just taken the wrong way by the moronic serverops who can't avoid sticking a ruler between their legs. Personally, I'd prefer having separate servers for content (mp3 specific network, DivX specific network, binary specific network, etc.).

  • by TheTick21 ( 143167 ) on Friday September 22, 2000 @07:07AM (#761022) Homepage
    I've always thought of gnutella as more of a demonstration than a finished product. While it may not be the best implementation it shows that distributed file sharing can work well with no central server...it's an important step...this version of gnutella may have reached its limit...but there will be more...just some thoughts


    My Home: Apartment6 [apartment6.org]
  • by Derek Pomery ( 2028 ) on Friday September 22, 2000 @07:07AM (#761023)
    In the article they point out that the load could be cut in half by fixing some bad code.
    They further mention that proposals for a redesigned version have already been made.
    link from article [wego.com]
    Not only that, it says support and resources for this project are being sought out - it's active, it's open source, what more do we want?
    Given the interest in Gnutella, I don't see any problem finding people to fix known bugs.
    Rather than seeing this as the death of Gnutella, I saw it more as a positive article pointing out known bugs that are being fixed, and announcing the planning of a new and even more powerful version.
  • From what I have read about Gnutella they also "scale by separation", meaning that messages do not actually reach the entire network, but only some Nn number of nodes (this is what they refer to as the horizon). Optimally, you would want to choose Nn so that you get equality in the equations from my last post. That may work, but as soon as you have hit Nn users, you lose the network effect, as new users joining will no longer bring any additional value to other users - which defeats the entire purpose IMHO.

    Isn't that what IRC does, thus causing the dreaded netsplits? Actually, I'd be interested to know how IRC addresses similar problems. It works pretty well most of the time.

    IETF RFC page, here I come...

    Vovida, OS VoIP
    Beer recipe: free! #Source
    Cold pints: $2 #Product

  • by Kaa ( 21510 ) on Friday September 22, 2000 @07:08AM (#761026) Homepage
    This is the problem with ALL distributed architectures. It's an N^2 problem.

    Only if you insist on reaching all the nodes all the time. If you can afford to reach only a subset of the nodes for any given request, then the problem becomes one of proper clustering.

    Note that Napster also implements kind of clustering: you see the files of people in your "cluster", not of all Napster users on Earth.

    Kaa
  • The problem with the 56K dialup bottleneck can be reduced by changing the way it uses the network (as the article mentions), and re-organizing the network. It would help if it could detect faster links and group machines based on both service and network characteristics.

    More caching could also help -- a machine might not have a particular item, but it may remember that it recently saw the item. So queries for popular items would quickly encounter a cache entry and be directed to a source. That could be done by having responses be noticed both by the requester and by the machine which passed the query onward -- if there is an "upstream", the popular results would float upward.
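    A hedged sketch of that result cache; no real Gnutella client is assumed to work this way. A node remembers which host answered a recent query and short-circuits repeats:

```python
# Minimal query-result cache a forwarding node could keep.
class CachingNode:
    def __init__(self, capacity=128):
        self.cache = {}          # query -> host that answered it
        self.capacity = capacity

    def remember(self, query, source_host):
        if len(self.cache) >= self.capacity:
            # Evict the oldest entry (dicts preserve insertion order).
            self.cache.pop(next(iter(self.cache)))
        self.cache[query] = source_host

    def lookup(self, query):
        # A hit lets us point the requester straight at a source
        # instead of forwarding the query another hop.
        return self.cache.get(query)
```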

  • Ah, I forgot to mention that one, slow and buggy. I think this has more to do with the terrible Java implementation under FreeBSD though (like Java ICQ).
  • There is a reason for this...

    http://bbspot.com/News/2000/7/news_source.html [bbspot.com]

    It's the magical news elf Glandritek.
  • It's stupid. The architecture of Gnutella is what's broken, not the code which implements it. A true peer to peer network is inherently limited.
    --
  • by Phokus ( 192971 ) on Friday September 22, 2000 @08:21AM (#761036)
    Another thing came to mind: Metcalfe's law. The value of a network is proportional to the square of the number of nodes actually on the 'net. If you look at the graph of that, it grows quadratically. I figure in Gnutella's case, its usefulness would be inversely proportional to that curve. Any comments?
  • Dude, I take back all that shit I just said, 'cuz I just got bitchslapped for no reason!

    And this shit is WACK, how the hell do you get a (-1, Funny) rating with no Overrated moderations? That's just not possible.

    Someone fix this shit, man, ALL my posts are modded down, even unrelated ones! Who has mod points like that?
  • I used to think Gnutella was great. It had speedy searches and was pretty fast. But now it just sucks. It's not so much that searches take forever now, it's that the searches return so much shit with them. Ads for webpages, viruses, things that don't even pertain to what you were searching for.

    Gnutella needs a replacement and doesn't need to continually get a facelift that makes it look nicer.

  • by bloosqr ( 33593 ) on Friday September 22, 2000 @07:09AM (#761041) Homepage
    Every time I've tried gnutella I've managed to find nothing in comparison to napster (even wrapster). I've actually tried just randomly downloading things on gnutella, i.e. 60k (goatsex) files, and just get timed out. I've heard it was much more usable in the summer, however. The only upside of the current version of gnutella is that it's highly entertaining watching the stream of searches coming in :)

    It's been mentioned before, but some ways of fixing the situation may include making the searches bandwidth related, to filter out the modems. Perhaps a better idea would be to have an auto peer mode where high bandwidth connections become servers for a cluster of machines near them (gaining mojo points, to take the Mojo example). Then clients can just search the (relatively) finite collection of high bandwidth, high speed servers, much like in the form of napster, but the client/server analogy is a bit more fluid..
  • Offtopic?

    You Americans really don't get sarcasm/irony, do you.

    Maybe it could be a checkbox in preferences "I understand sarcasm and will moderate accordingly".
  • That's why you don't create a hard-coded central location like Napster has. Ideally, each set of 'adjacent' hosts should hold an election amongst themselves to determine which one will hold the index. The election could be weighted by factors such as bandwidth and processor speed. With a setup like this, there's still no central location, but there are dynamically-determined common indexing points.


    --Phil (Why does Slashdot go on and on about Napster and Gnutella but never mention Hotline?)
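    A sketch of that weighted election, assuming each host reports its bandwidth and CPU speed; the 0.7/0.3 weighting is an arbitrary choice for illustration, not a proposal:

```python
# Elect the index holder among adjacent hosts by a weighted score.
def elect_index_host(hosts):
    # hosts: list of (name, bandwidth_kbps, cpu_mhz) tuples.
    # Weight bandwidth more heavily than CPU; the 0.7/0.3 split
    # is arbitrary, chosen only for illustration.
    def score(host):
        _, bandwidth, cpu = host
        return 0.7 * bandwidth + 0.3 * cpu
    return max(hosts, key=score)[0]

neighbors = [("modem-user", 56, 400),
             ("dsl-user", 768, 300),
             ("t1-user", 1544, 200)]
print(elect_index_host(neighbors))  # t1-user
```

    Re-running the election whenever the neighbor set changes keeps the indexing point dynamic, with no hard-coded central server.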
  • ABMNet is an example of one. It's popular amongst newsgroupies.

    So where is it? Other than a group of equestrians that like Arabians, and the Anything But Microsoft Net parody of ZDNet, Google doesn't show anything.

    --
    Evan

  • by burris ( 122191 ) on Friday September 22, 2000 @08:37AM (#761058)
    check out Mojo Nation [mojonation.net] which is an open source distributed filesystem that is attempting to address many of the issues that plague systems like Gnutella.

    It uses centralized content tracking servers, but anyone can run one by just clicking a switch in their client. The content trackers store XML metadata describing the file, so you can search on different fields in different file type categories (easily definable).

    The files themselves are broken into small redundant pieces and spread over the network. You only need half of the available pieces to reconstruct the original file. This way the system is resistant to servers disappearing. It also means you distribute your load over many hosts, and clients with slower connections can still provide block services.

    The coolest thing is that Mojo Nation has a built-in digital cash called "Mojo" and a microcredit system that effectively turns it into a barter system for disk space, bandwidth, and CPU. Whenever you upload, download, search, or otherwise consume another system's resources, you must compensate them with Mojo. The Mojo represents the disk space, CPU, and bandwidth you are using. You can get Mojo by contributing your resources to the network through the client software (it's automagic). This way nobody can consume more resources than they are contributing to the system. Each person that uses it helps to make it stronger. Of course, being a real digital cash system, nothing stops people from sending Mojo to each other in e-mail and settling the transaction with something like PayPal.

    It's really cool, check it out.

    Burris
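    The "half of the pieces" property comes from erasure coding. Mojo Nation's actual scheme is more sophisticated, but a toy (3,2) code shows the idea: two data blocks plus an XOR parity block, where any two of the three suffice to rebuild the third.

```python
# Toy (3,2) erasure code: any two of (a, b, parity) recover the rest.
def encode(data: bytes):
    half = (len(data) + 1) // 2
    a = data[:half]
    b = data[half:].ljust(half, b"\0")           # pad so blocks match
    parity = bytes(x ^ y for x, y in zip(a, b))  # XOR parity block
    return a, b, parity

def xor_blocks(x: bytes, y: bytes) -> bytes:
    # XOR-ing the surviving data block with the parity block
    # reconstructs the lost data block.
    return bytes(p ^ q for p, q in zip(x, y))

a, b, parity = encode(b"gnutella")
print(xor_blocks(a, parity) == b)  # True: b rebuilt from a + parity
```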

  • by pb ( 1020 ) on Friday September 22, 2000 @07:10AM (#761060)
    Some of these problems could be easily solved.

    I think there needs to be a way to tell what the network load on an individual node is, and attempt to negotiate connections with machines of similar connection speeds or ping times up to a maximum load cut-off.

    Of course, there will still be people with hacked clients that report a bandwidth of 0 and a load of 10, but suspiciously have low pings. Those leeches should be killed, or at least swamped with connections...

    Also, it would be nice if the network could re-organize over time, as in, promote people in your segment who give you back successful searches, and cut off branches that don't yield search results. Then everyone who wants free books would eventually find each other, and be separate from everyone who wants free porn (the other 99%, it seems)
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
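    A sketch of that load-aware peer selection; the "speed" and "load" fields and the 0.8 cut-off are hypothetical, purely to illustrate clustering peers of similar connection speed:

```python
# Prefer peers with similar speed; drop any over a load cut-off.
MAX_LOAD = 0.8  # arbitrary illustrative threshold

def pick_peers(candidates, my_speed, count=4):
    usable = [p for p in candidates if p["load"] <= MAX_LOAD]
    # Closest connection speeds first, so modems cluster with modems
    # and broadband with broadband.
    usable.sort(key=lambda p: abs(p["speed"] - my_speed))
    return [p["host"] for p in usable[:count]]

candidates = [{"host": "a", "speed": 56, "load": 0.2},
              {"host": "b", "speed": 1500, "load": 0.1},
              {"host": "c", "speed": 64, "load": 0.9}]
print(pick_peers(candidates, my_speed=56, count=2))  # ['a', 'b']
```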
  • Peer-to-peer searching was just a bad design. It doesn't make any sense to have to hit thousands of different nodes with each search. Of course there'd be a backup! What's lacking with Open Source developers is a basic understanding of data and databases. The two existing databases are sad compared to Oracle and DB2. Until some database expertise (usually older, experienced developers) comes into the Open Source arena, bad data designs like the one in Gnutella are going to continue.

I had the rare misfortune of being one of the first people to try and implement a PL/1 compiler. -- T. Cheatham
