Hypernets -- Good (G)news for Gnutella
Red Roo writes: "This online article addresses the recent
criticism of Gnutella network scalability by pointing out that it is a Cayley tree. As a viable candidate for massively scalable P2P bandwidth, all trees are dead! But by going to higher dimensional virtual networks (aka "hypernets") e.g., hypercubes or hypertori, near linear scalability can be achieved for P2P populations on the order of several million peers each with only 20 open connections. This concept seems to have been entirely overlooked by critics and developers alike."
The Fly In The Hypernet Ointment (Score:5, Funny)
Firstly we had a continuing problem with dropped packets, but things started really screwing up when our time domain packet switching started picking up packets that HAD NOT BEEN SENT YET!
The collision rate went up to an astounding 53% by the time I was able to pull the plug. In short, it sucked big time!
By the way - don't waste your time buying any of those California lotto tickets this week because just before I downed the thing I surfed the net....
love the moderation (Score:1)
Re:love the moderation (Score:1)
Re:The Fly In The Hypernet Ointment (Score:1, Informative)
As much as I would like to like the open source stuff, Gnutella is not ready for the big time for average users who have stuff to share. The windoze version choked my old 486 at times while doing nothing. Compared to Morpheus, it was like Visual Basic slopware that uses lots of resources. There was no content on the net. Anything remotely interesting was behind a firewall and was either completely unreachable or had zero bandwidth. There was too much traffic even when not transferring files - about 1/4 of my uplink on a DSL line.
Morpheus, on the other hand, was quite usable on the old 486 windoze 95 box. (see below) I run AtGuard, so I didn't even know there were popup ads. It takes up only 20% of the CPU time during transfers and is idling most of the time. There is rarely any traffic when idle - a few blips every ten minutes or so. It would seem their supernode architecture is the right approach.
As to content, there are over 600,000 users sharing ~650 terabytes of files. It is not perfect, but it is quite simple to use, even for average users, who have pulled in all kinds of content from all over the world.
Note:
I use 95 because it has the smallest memory & HD footprint - disk space is for shared files. The quiet 486 box generates the least amount of heat and survived 3 extended (2-3 days before I knew) fan failures over the hottest days in the summer without crashing. This is my Morpheus (only) machine that runs 24/7; I reboot it maybe every other month. I would not trust my main box, which has the takeoff noise and hot exhaust of a Harrier, to be left unattended overnight.
Re:The Fly In The Hypernet Ointment (Score:1)
Sorry, I just couldn't resist.
Re:The Fly In The Hypernet Ointment (Score:2)
So, because your router is not frozen in time, you can see the future?
Besides, I can't remember the last time I went to the lotto website and it wasn't down.
More Info (Score:3, Informative)
what a crock (Score:1, Informative)
- a hypertorus has d^2 orthogonal dimensions, not d, as the "article" states
- his explanation of network diameter is a complete fantasy. the latency between points on a network has absolutely nothing to do with the chordal length, as he would well know if he'd get out of his ivory tower and do actual internet studies, as I have
- equating the path length with the peer horizon is utter speculation, and Ritter himself has denounced this type of mathematical voodoo
- "Little's Law," as he calls it, has been discredited at least a half dozen times by researchers at Harvard, Stanford, and elsewhere
- anyone with a grounding in mathematical principles knows that there is no such thing as a binary hypercube, any more than there is such a thing as a square circle
I can't believe Slashdot just posts this pseudomathematical nonsense without doing even elementary fact-checking
Re:what a crock (Score:1)
You must be new.
Or at least have an everlasting optimism.
Re:what a crock (Score:4, Informative)
Re:what a crock (Score:1)
-1 Troll on the MQR standard (Score:3, Insightful)
I can't believe Slashdot just posts this pseudomathematical nonsense without doing even elementary fact-checking
At first, I thought you were referring to your own post, and was tempted to give you a counteracting +1 Insightful.
-- MarkusQ
Re:what a crock (Score:1)
Sure there is.
The n-dimensional cube graph, aka the n-dimensional hypercube graph, is the graph consisting of 2^n vertices, each corresponding to one of the binary words of length n. Two vertices are adjacent if and only if the binary words they represent differ in exactly one bit position. It has n*2^(n-1) edges.
Binary hypercubes and Gray codes have been used in tons of applications for a long time. They are used in computer architecture, database design, digital communications, solving the Chinese ring puzzle, etc.
In short, there is such a thing as a Binary Hypercube.
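For the skeptics, the construction described above is a few lines of Python (just an illustrative sketch, not tied to any client):

```python
def hypercube_edges(n):
    """Edges of the n-dimensional binary hypercube: vertices are the
    2**n binary words of length n, and two vertices are adjacent iff
    they differ in exactly one bit position."""
    edges = set()
    for v in range(2 ** n):
        for bit in range(n):
            u = v ^ (1 << bit)            # flip one bit -> a neighbor
            edges.add((min(u, v), max(u, v)))
    return edges

# The 3-cube (an ordinary cube): 8 vertices, 3 * 2**2 = 12 edges.
print(len(hypercube_edges(3)))  # 12
```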
Re:what a crock (Score:2)
what, you new here?
wish I understood this kind of math (Score:2, Offtopic)
Re:wish I understood this kind of math (Score:2, Informative)
Cayley Tree - A network where the nodes are connected together without any central server.
Hypernet - A network where smaller nodes connect to high-capacity supernodes, which are in turn connected to other supernodes, each with their own sub-network, i.e. the same thing as FastTrack, but without the central encryption servers (at least, that's how I understand it).
In short, this is good technology, but, once you scrape the marketing bullshit off, hardly new.
Re:wish I understood this kind of math (Score:1)
That's an odd way to explain it... (Score:5, Informative)
A hypernet is like a grid: imagine the nodes like the places where the lines cross on graph paper, so each node (except at the edges) is connected to 4 others, in a regular, predictable pattern with lots of cycles. Now imagine a 3d grid, like a lot of stacked sheets of graph paper with each node connected not only North, East, South, and West, but also up and down. Each connected to 6 others, in a regular, predictable pattern with lots of cycles. That's as far as you can go with physical models, but in a freely-connecting network like the internet, you can keep going to 8 connections per node (a 4-dimensional hypercube network), 10 connections (5 dimensions), and so on.
That explanation was for a hypercube; a hypertorus would be like going from a ring of connections around a circle to a regular set of connections over the surface of a donut, and so forth.
Either way, it's one huge mass of cycles, lots of redundancy, lots of routing choices. If you cut a connection it doesn't matter much; naturally if one user bogs a connection (or chain of connections) down with a heavy load, it's practically like it's been cut. Hypernetworks give you the freedom to route around traffic jams, and the regular structure (cube or torus) simplifies the routing over an unstructured network of random connections.
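The neighbor rule described above (4 connections on graph paper, 6 in the 3d stack, 2*d in general) can be sketched in Python; the wrap-around is what turns the grid into a torus, and the function name is just for illustration:

```python
def torus_neighbors(coord, side):
    """Neighbors of a node in a d-dimensional torus with `side` nodes
    per dimension: step +/-1 along each axis, wrapping around at the
    edges.  The wrap-around is what makes it a torus rather than a
    plain grid, so even "edge" nodes keep all 2*d connections."""
    out = []
    for axis in range(len(coord)):
        for step in (1, -1):
            nxt = list(coord)
            nxt[axis] = (nxt[axis] + step) % side
            out.append(tuple(nxt))
    return out

# 3 dimensions: north, east, south, west, up, down -> 6 neighbors.
print(len(torus_neighbors((0, 0, 0), 10)))  # 6
```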
Re:That's an odd way to explain it... (Score:1)
What would be really interesting would be if the hypernet could seamlessly connect totally different network types. Unfortunately, I doubt this would work, since even if you were able to translate the search data, the methods for getting the actual files would probably be incompatible.
Re:wish I understood this kind of math (Score:2)
There were supercomputers built by Intel in the early '90s that used hypercube connectivity schemes to minimize connection times for data, as I recall. This has been done before.
Re:wish I understood this kind of math (Score:1, Offtopic)
Re:wish I understood this kind of math (Score:2)
Re:wish I understood this kind of math (Score:2)
(Note: This page is much more fun while under the influence of hallucinogenics.)
Neil Gunther and PDQ software (Score:4, Informative)
Gunther is well known in the Unix community for giving very good talks and courses on performance. Check out his open source performance analysis software PDQ (Pretty Damn Quick):
http://www.perfdynamics.com/Tools/PDQ.html
Babble, I say! Babble! (Score:1)
Check out this article. It explains more or less how we might shift the paradigm of a Gnutella-like P2P network towards a more geometric model that has *much* less bandwidth overhead and possibly much better performance. Beware the big words, but do yourself a favor and check it out.
oh yeah, and if the editor's michael, you can tag this on the end:
i fucking hate all p2p networks, and this is a really stupid concept, but check it out.
Re:Babble, I say! Babble! (Score:1)
i fucking hate all p2p networks, and this is a really stupid concept, but check it out.
and if the editor's Timothy, tag this onto the end:
i bet this doesn't work on windows.
More connections == Better network? (Score:2, Interesting)
In reality, I'm pretty sure no actual large-scale networks are like this, for obvious reasons, but I surmise they tend to be more treelike than Gnutella is, where each client tends to try to make as many connections to other clients as it can. Therefore, it should be pretty scalable; since each new client is making connections to a bunch of other clients that might not otherwise have a short distance between them, there aren't really going to be any bottlenecks.
Re:More connections == Better network? (Score:4, Interesting)
Despite what one of the other posters has said about "binary hypercubes" being nonsensical, they are simply a way of describing the nodes of a hypercube with a binary address. Each node in a 1024-node network has a ten bit address. Each one has to connect to only ten other nodes: the ten nodes specified by flipping only ONE bit of that node's address. This provides multiple redundant paths by which a message can be routed in the case of the failure or congestion of a node.
And by intermittently polling the other nodes to which it connects, it can keep a routing table which optimizes the paths to be used. The major difference is that in Hillis' Thinking Machines the number of nodes was capped; in fact it was a fixed number of nodes. Now, implementing a dynamic network in a hypercube or hypertoroid topology brings on a new set of problems, such as dynamically re-allocating the hypercube node addresses as users fall off and climb back on to the network. This can be more of a bear than people realize.
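The one-bit-flip addressing and routing described above is easy to sketch in Python (a toy illustration, not any client's actual routing code):

```python
def route(src, dst):
    """Greedy routing on a binary hypercube (e.g. 10-bit addresses
    for a 1024-node network): at each hop, flip one bit in which the
    current address still differs from the destination.  Any
    differing bit would do, which is where the redundant paths come
    from; here we just take the lowest one."""
    path = [src]
    cur = src
    while cur != dst:
        cur ^= (cur ^ dst) & -(cur ^ dst)  # flip lowest differing bit
        path.append(cur)
    return path

# Addresses differing in 3 bits are 3 hops apart -- never more than 10.
print(len(route(0b0000000000, 0b1000000101)) - 1)  # 3
```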
Implementing the bear (Score:5, Interesting)
This shouldn't be too hard (at least it doesn't look too hard sitting here on a Sunday morning, halfway through my first cup of liquid brains). The key is to note that we can't (and therefore needn't bother trying to) enforce the topology at all times. Instead, we just want to bias the network towards the desired form. For example:
This should at least be functional; no doubt there are a number of clever hacks that could be made...
-- MarkusQ
Re:Implementing the bear (Score:2)
Except that you describe routing based on Hamming distance (which won't work because of looping issues) rather than shared prefix/suffix, this sounds a lot like Tapestry [berkeley.edu].
Re:Implementing the bear (Score:2)
Salamander: Except that you describe routing based on Hamming distance (which won't work because of looping issues) rather than shared prefix/suffix, this sounds a lot like Tapestry.
Actually, I was just using the Hamming distance as a rough gauge for "how far is this node from where it should be in a perfect binary hypercube" (since in a perfect binary hypercube, each node is connected to all the nodes (and only the nodes) at Hamming distance 1 from its ID). So what I described was a way to 1) pick a corner at random, 2) wander towards it, and 3) go to another nearby location if the one you picked turns out to be occupied.
I'm not familiar with the looping issues you refer to. Can you elucidate? (Note also that this would not be so much a way to "route" as a way to "partially broadcast" (e.g., maybe decrement TTL by two when sending "in the wrong direction", or propagate ox-bow removal info when something is broadcast to you via a sub-optimal path). We wouldn't count on nodes being where they should be, just Hamming-close.)
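A rough Python sketch of the pick-wander-settle procedure described above (`claim_id` and the `occupied` set are hypothetical names; a real node would learn occupancy by asking its peers):

```python
import random

def claim_id(occupied, n_bits, tries=64):
    """Sketch of the join procedure: 1) pick a corner of the n-cube
    at random, then 2)/3) if it's taken, wander one bit-flip at a
    time until a free corner turns up.  `occupied` stands in for
    occupancy info a real node would get from its peers."""
    target = random.randrange(2 ** n_bits)
    for _ in range(tries):
        if target not in occupied:
            return target
        target ^= 1 << random.randrange(n_bits)  # hop to a Hamming-1 neighbor
    raise RuntimeError("couldn't find a free corner nearby")

# With the 1-cube's corner 0 taken, the only possible answer is corner 1.
print(claim_id({0}, 1))  # 1
```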
-- MarkusQ
Re:Implementing the bear (Score:2)
You're right. I was conflating the issue of how to establish and maintain a topology with that of how to route messages within that topology. I tend to do that, because I find that in real systems the interactions between these two supposedly-separate issues are so strong that they become inseparable.
Upon reflection, I find it hard to judge how well a system such as yours would work. On the one hand, you might run into a classic hill-climbing problem. Two nodes that "should be" adjacent to each other in a nearly-ideal hypercube might be too far separated (in terms of the underlying IP network) initially to find each other before they each settle on local maxima instead. On the other hand, it might not be good thing if they did find each other, because you might end up with a really neat hypercube at the upper level, but each "adjacency" in the hypercube is really a long multihop route across the continent at the lower level.
Trying to find a balance between these two extremes might be difficult. In fact, trying to impose a hypercube overlay-network topology on top of an IP network whose physical structure is most definitely not hypercube-like might be a fundamentally doomed idea. I don't mean to say it's not worth it to try. Only detailed simulation or even real-life deployment can truly provide the answers to these sorts of questions. I'm just saying that this particular problem domain tends to be "swampy"; things that appear promising at first run into pitfalls much further down the road, much to everyone's frustration and indignation.
Re:Implementing the bear (Score:2)
*smile* That's why in real life I tend to bid such jobs by the hour rather than by the project.
-- MarkusQ
Questions about implementing the bear. (Score:1)
Then we can just start making connections, getting their IDs, and dropping the furthest out, yes?
I like the idea of "introduction" via peers - watch addresses that come by, if one is a certain Hamming distance or shorter away from one of our connections, tell that connection. 's that basically how it works?
Finding Hamming distance sounds like a parity-class problem (XOR and sum, right?) so it shouldn't be bad at all. Though is there an easier way for the sum part of it than "if the last bit is one, increment counter, rotate right, repeat"? This might not be a problem, and that's an O(n) loop (right?) - not bad, but is there an O(1) solution? Maybe we'll just want to check addresses at random rather than every one - save cycles for more useful things. Also, saturated hosts will probably not even bother with comparisons - if all n of our local peers are right, then why bother?
If we use the introduction method, won't the "core" of the network [those hosts that stay on for a long long time] eventually fall into place as a proper hypercube?
How do we handle address space conflicts? If, say, the network were to partition itself in two, somehow (links just happened to be dropped in the right way), and then a host comes up that bridges the two segments, what do we do? Worse, what happens if two perfect hypercubes overlap [all n connections in each are only one bitflip away, and all have the same numbers]?
--Knots
Re:Questions about implementing the bear. (Score:2)
If you're trying to minimize Hamming distance, wouldn't it be better to establish the first connection without an ID then ask the peer for its ID and the IDs around it? Then we pick a non-occupied one and use it.
I suppose I'm assuming that there are a relatively few "doors" into the network, and worried about clustering; also, my gut feeling is that starting at a random point lets you disregard a number of potential problems on statistical grounds.
Then we can just start making connections, getting their IDs, and dropping the furthest out, yes?
Hmmm. What you are describing (if you drop my assumption of random starting location) results in what is called flocking. Basically, everyone winds up trying to move to the center of the cloud as they see it. Nice dense connectivity, but (since we aren't charged for hamming-distance) there's no real advantage to it, and it increases the risk of fragmentation.
I like the idea of "introduction" via peers - watch addresses that come by, if one is a certain Hamming distance or shorter away from one of our connections, tell that connection. 's that basically how it works?
Yep.
Finding Hamming distance sounds like a parity-class problem (XOR and sum, right?) so it shouldn't be bad at all. Though is there an easier way for the sum part of it than "if the last bit is one, increment counter, rotate right, repeat"? This might not be a problem, and that's an O(n) loop (right?) - not bad, but is there an O(1) solution?
It's O(n) where n is the number of bits in the address, which scales as the log of the max size of the network; not bad at all. You could reduce it further by doing a lookup table (array [0..255] of 0..8 = (0..255).collect{ |i| i.bits_set } or some such), but there isn't really any need; the amount of work is << the amount required just to receive a packet.
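In Python the lookup-table version looks like this (a sketch; wider IDs just mean more bytes in the loop):

```python
# Hamming distance via XOR plus a 256-entry bits-set table -- a Python
# spelling of the lookup-table trick sketched in Ruby above.
POPCOUNT = [bin(i).count("1") for i in range(256)]

def hamming(a, b, n_bytes=2):
    """Distance between two node IDs up to n_bytes * 8 bits wide."""
    x = a ^ b
    total = 0
    for _ in range(n_bytes):
        total += POPCOUNT[x & 0xFF]  # one table lookup per byte
        x >>= 8
    return total

print(hamming(0b1010, 0b0110))  # 2 (the IDs differ in two bit positions)
```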
Maybe we'll just want to check addresses at random rather than every one - save cycles for more useful things. Also, saturated hosts will probably not even bother with comparisons - if all n of our local peers are right, then why bother?
Cynical answer: there's no harm (all of this should be very cheap) and we can avoid having the behaviour change after the beast has been running for hours, which would complicate the code and might open the door for mysterious & hard to find bugs. ("It ran fine for about a week, then it just went cross-eyed and dumped grape jelly onto the hard drive!") If I'm going to leave something running for a long time I like (where possible) to have it hitting the same code paths at the end as it was in the beginning.
How do we handle address space conflicts? If, say, the network were to partition itself in two, somehow (links just happened to be dropped in the right way), and then a host comes up that bridges the two segments, what do we do? Worse, what happens if two perfect hypercubes overlap [all n connections in each are only one bitflip away, and all have the same numbers]?
This is one of the "you can ignore it on statistical grounds" points. With random starting IDs the odds of this scale towards the odds of all the air being on the other side of the room the next time you inhale. It could happen, but I wouldn't hold my breath.
-- MarkusQ
Re:Implementing the bear (Score:1)
A client can do everything you suggest (to try to shape the network) based on its IP address. This way, the current TCP/IP network routing can be used, which is probably more comprehensive and efficient (in real-world terms) than any "virtual" network would be. Even better... it could be fitted into the current Gnutella protocol by only adding to the connection process.
Re:Implementing the bear (Score:2)
Sounds good...I'm reluctant to give up the random address selection but this seems reasonable...hmmm...
-- MarkusQ
Re:Implementing the bear (Score:2)
Excellent point.
Even a quite large hammer won't change this.
But a clever wrench might. Hmmm. Let me think on it...
-- MarkusQ
After a little thought.. (Score:2)
Not really. If the present system is tree-on-(uncorrelated)tree, you are still better off going to hypercube-on-(uncorrelated)tree. I agree that you'd be much better off going to a hypercube-on-tree where the hypercube is "rotated" to align with short hops, but that doesn't mean you have to do it that way.
Given that we can't do it perfectly, we can still try to do "reasonably well". As a first stab:
This should drift towards shorter edges without breaking the pseudo-hypercube. Even though you have more IDs active, it shouldn't affect the amount of traffic at each node if the routing/broadcast rules are set up right. Note also that there is no additional routing traffic required to do this. Keep in mind that some problems are very hard to solve but easy to approximate; often RightAnswer=NP, 99%Answer=O(N).
-- MarkusQ
Re:After a little thought.. (Score:2)
No, we have a considerable advantage over the military networks; they are responsible for everything (down to making sure there is power, EMP hardening, etc., etc.) while we just have to worry about the very top. Specifically, the ping times here are at the IP level, not at the p2p level. Since someone else is kind enough to maintain that level for us (and with much greater stability than we could hope for at the p2p layer) the (IP) ping times won't change when (p2p) nodes come & go.
I don't think your algorithm is dynamic enough to cope without having the nodes broadcast their presence in some way.
Announce, yes. Broadcast, no. In a world with no cheaters, all you need to do (once connected to the network) is talk to your peers (& nodes they refer you to). No broadcasts needed.
If there are cheaters, you need to do a bit more (e.g. buffer & broadcast proportionate to the depth of the expected cheating). It might seem like this would be an open ended arms race, but in order to pass as good nodes the cheaters would have to act like good nodes, and if things go far enough the only way to cheat would be to be a good node for a significant fraction of the time. So you wind up winning after all, at least as far as the network topology goes.
As for dealing with cheaters, it just moves the problem to a new level (I won't go into more details since I don't want to give any ideas to the bad guys), but as far as I can see it wouldn't be any worse than what those gnutty lime guys face now. Ultimately, the bad guys could get themselves spiffy uniforms and go door to door with guns, matches & branding irons.
I don't know of a topology that could handle that.
-- MarkusQ
Question ... (Score:2)
As far as I understand it, a lot of the models used in scalability analyses of Gnutella seem to assume a homogeneously connected network. Whereas analyses of the Internet show there to be (a) a few highly connected sites (b) a large number of sites that are not well connected at all, and (c) a tendency for new network connections to appear on already well-connected sites, rather than on less-well connected ones.
I was wondering if there would be a difference between a network's scalability if (a) the distribution of edges between nodes was considered homogeneous, or if (b) the distribution of nodes was considered as skewed (e.g. a Zipf distribution). Is case (b) more scalable than case (a)?
Thanks.
Re:Question ... (Score:2)
Does it matter? (Score:1)
Sure, as the number of users gets larger, the quality of my searches stops getting better. But I don't care, because the quality of my searches is already good enough.
This whole thing sounds like sour grapes from people who want to control from a central server.
Re:Does it matter? (Score:3, Insightful)
Don't break out the champagne yet... (Score:1, Flamebait)
Tell your local member of parliament you think this fucking stinks. Or bye-bye P2P, scalable or not.
Re:Don't break out the champagne yet... (Score:3, Insightful)
But I'm not convinced of this particular threat.
It would require worldwide cooperation at every level of computing. It would be difficult to draft an international law AND define what a computer is. Does my digital watch need DRM?
To get this law passed, one country (probably the US) is going to have to implement it unilaterally. Chaos will ensue. I think it's just too much hassle for a government to embark on.
So mark my words, and then punch me with them when you can't play your
Beware WIPO and Geneva (Score:2)
We, all of us, tend to concentrate more on our own domestic politics than international political trends, especially when thinking and discussing things like privacy, encryption, and yes, the digital copy prevention technology euphemistically referred to as digital rights management (DRM).
But keep in mind that the Hague convention is already passing and allows, indeed requires, any national law regarding copyright (and via the DMCA that includes copy protection, i.e. DRM) to be applied to every signatory country.
And there are other treaties being railroaded through Geneva by the Copyright Cartels as we discuss this.
I too once hoped that the DMCA would make the United States so uncompetitive that the problem would be self-solving. This would be true, were it not for the fact that the corporate interests pushing these sorts of things are doing so internationally (both locally in various parliaments, e.g. the UK, the EU as a whole, and Australia, and at the international treaty level).
In five or ten years America's unilateral rewrite of copyright law into criminal law would make it uncompetitive in the new, digital economy (with ripple effects into other parts of the economy, most likely), but we are not going to have five or ten years before things like the DMCA become international law and the playing field is levelled once more, at a much lower common denominator.
Re:Beware WIPO and Geneva (Score:1)
"be pure, be vigilant, behave" - a two sided coin
Re:Don't break out the champagne yet... (Score:1)
Re:Don't break out the champagne yet... (Score:1)
----BEGIN divergence from topic
Haha! In the good 'ole US of A we kicked out the tyrants and their tyrannical parliamentary system! The American bicameral system and electoral college provide a much better form of abstraction, indirection, and power balance in order to minimize the influence of special interests and big money. The founding fathers were true visionaries and foresaw the problems the Europeans are having.
As recent events will show, we Americans could never... oh wait, sorry, I was sleep-typing again. It appears to have been a rather good and almost realistic dream. Such a shame I woke up. Maybe I should get myself some extra strong coffee and pay extra close attention next election instead of sleep-voting again. If nobody votes with a clue, how is congress supposed to get a clue... This is representative government, remember?
Seriously, if you start voting with a clue, and cluing your neighbors in, and actively helping with campaigns you believe in, then the system will work closer to the way the founding fathers intended. The system was designed for a bunch of riled up revolutionary citizens, not the bunch of apathetic whiners we've become. (Yes, I have volunteered my time with a political campaign, and I have written my congresscritters, and I have submitted an opinion on the MSFT case.) You can't expect the legislature to have a clue unless you personally have done something to clue in the voters. It's that simple. Wasting my time doing something is better than doing nothing.
----END divergence from topic
I wish DRM circumvention wasn't driven by greed in most citizens, but it still represents some sort of civil disobedience, in the same way that visiting speakeasies during Prohibition did. Writing P2P systems and using them shows congresscritters in some small way how we feel about DRM legislation. I just wish it were a more altruistic demonstration.
Yes, I do realize that the poster's choice of language appears to place him/her on my side of the "big pond".
Re:Don't break out the champagne yet... (Score:2)
Good lord!!! (Score:2)
Re:Good lord!!! (Score:3, Interesting)
I can grab files, with no small amount of effort, from an online file sharing service, and maybe get 2 GB a day. If I network, in person, with people who share similar tastes in music, I can get a lot more bang for the buck. As soon as we see larger portable technologies (It's already happening), trading media will be just like trading games in user groups was back in the 80s and early 90s. We all just bring our 300GB portable disks to meetings, link them all up, and take them home to copy onto our TB home systems.
The only thing that would prevent this from happening is a very rapid growth of broadband, to something like reliable 10Mbit levels. I don't see that happening before hard disk space grows to the sizes I quoted above.
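Quick arithmetic on that threshold, assuming one meeting a week and decimal gigabytes (a back-of-the-envelope sketch, not a claim about any particular network):

```python
# How fast must a link be to keep up with carrying one 300 GB
# portable disk to a weekly meeting?  (Figures from the post above.)
disk_gb = 300
seconds_per_week = 7 * 24 * 3600
effective_mbit = disk_gb * 8 * 1000 / seconds_per_week  # GB -> Mbit
print(round(effective_mbit, 1))  # ~4.0 Mbit/s sustained
```

So a reliable 10 Mbit/s line really would outrun the weekly disk swap, which is roughly where the post puts the tipping point.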
Re:Good lord!!! (Score:2)
python & forte 3.0 tonite and java 1.4 last nite... not bad for 6.5 kB max bandwidth per sec (a 56kb modem is a 6.5 kB/sec transfer rate... I think they probably just use bits to make it sound faster).
Correct me if I'm wrong. (Score:3, Insightful)
Correct me if I'm wrong here, but the article seems to be saying that packet switching is more efficient than old style circuit switching (hierarchical switching?). It says that bouncing stuff around nodes connected to a bunch of other nodes and letting the stuff find a path to its destination is more efficient and scalable than any kind of tree structure where stuff goes down to the trunk and back up a different branch to reach its destination.
Unfortunately, with the way things are set up right now, I think our beloved internet is set up like a toroid instead of a cube. You have a backbone as the middle loop, and coming off of it are local rings that provide service to local ISPs, who then sell to their end users. In the end I picture a fuzzy toroid. And according to the article, those are more scalable than trees, but not as much as the cube. However, the article says that they are harder to implement than the cubes; not so, as they seem to have evolved in the marketplace naturally, and setting up a cube-like network in the real world is harder.
But they're talking about this applied to software, and virtual networks, not real world hardware. However, seeing as the real world has moved from a tree-based telecom system to the toroid scheme of the current system, it would be interesting to see what happens when the toroidal system in the real world runs into scalability problems and goes for the cube.
Re:Correct me if I'm wrong. (Score:2)
As long as it has some sort of geometric shape, we can make searching more efficient.
Gnutella has no shape; it's a handful of spaghetti thrown on the floor.
Then there's FastTrack... it's like throwing a handful of SpaghettiOs on top of the first pile of spaghetti.
Re:Correct me if I'm wrong. (Score:1)
And there are many examples of cubic networks in use. Just look at some schemes for Beowulf clusters.
p2p evolution (Score:5, Interesting)
- routing pushes instead of broadcasting them
- caching pings/pongs, and even queryhits
- use of UltraPeer/leaf relationship, which increases the speed at which traffic is routed
There are other ideas that Gnutella developers like those at Limewire have been kicking around, which are similar to ideas that publishing networks like Freenet and MojoNation have, such as data specialization (i.e. queries are directed to those likely to have the data, not broadcast to the entire world).
I'm glad whenever mathematicians or people with specialities like traffic analysis examine existing p2p systems, or give their ideas on p2p systems - they might come up with some good ideas or give a good critique that clarifies elements of a p2p network. This paper is certainly less arrogant than ones with names like "Why Gnutella Can't Scale. No Really". A hypernet is an interesting idea, although I can think of a number of reasons why current p2p sharing networks would not implement them. Namely, because authoritarian networks like Napster were shut down by trade associations like the MPAA/RIAA, while more anarchic networks like Gnutella are more immune from such actions - we must consider not only the survival of the scaling network due to technical constraints like Dr. Gunther does, but also its survival due to legal constraints orchestrated by large corporations. Then there's the question of how many peers the network is designed for - scalability is just one factor in the reasons why I would use a particular p2p client. Luckily, we will have competition between p2p networks like FastTrack, Gnutella, Freenet and MNet (Mojonation), and perhaps different ones will be used for different purposes, just like Usenet, distributed.net and so forth.
Exactly backwards (Score:4, Insightful)
A hypernet is an interesting idea, although I can think of a number of reasons why current p2p sharing networks would not implement them. Namely, because authoritarian networks like Napster were shut down by trade associations like the MPAA/RIAA, while more anarchic networks like Gnutella are more immune from such actions - we must consider not only the survival of the scaling network due to technical constraints, as Dr. Gunther does, but also its survival due to legal constraints orchestrated by large corporations.
I think your concern here is exactly backwards. Specifically, higher dimensional topology would decrease the need for central "UltraPeer"s (also known as lawyer bait) and thus make the network harder to shut down. If the trend towards depending more on some "peers" than others continues to the natural limit, you wind up right back at Napster (one UberUltraPeer to rule them all, and in the darkness...get eaten by a grue if it's lucky, sued by the RIAA if it's not).
On the other hand, if the topology is made more scalable, the targets won't be as tempting at any given network size, and the whole thing would be harder to take down by force. If all nodes are equal, cutting one will likely create enough publicity to attract seven more to take its place.
-- MarkusQ
Hypercube (Score:3, Informative)
Hypothetical and theoretical (Score:1)
Hypernets -- The speedball connection (Score:1)
Already being implemented. (Score:3, Informative)
we've created a new Gnutella hierarchy, whereby high performance machines become 'Ultrapeers'. These machines accept connections from many LimeWire clients while also connecting to the rest of the Gnutella network. Moreover, the Ultrapeer shields these 'regular' LimeWire clients from the CPU and bandwidth requirements associated with Gnutella, directing traffic to clients in an efficient manner. The reason you see only one connection in your connections tab is because you are a LimeWire client connected to an Ultrapeer. Unfortunately, not all Ultrapeers are as good as others. If you find that you aren't getting many search results with the Ultrapeer you are connected to, simply disconnect and connect. You'll probably connect to a different Ultrapeer, which is more 'connected'. Also, as time goes on and the network grows, you'll receive more results. Moreover, we are currently working hard to ensure that any Ultrapeer you connect to will be well connected - stay tuned to future versions of LimeWire.
My success with the new structure is mixed. Downloads and searches seem to work almost as well as before, but I'm getting considerably fewer uploads, which must mean that someone, somewhere, is getting screwed. LimeWire itself is not a bad little product; its main claim to fame, of course, is that it runs well on both Mac OS X and Linux.
Re:Already being implemented. (Score:1)
Prototype P2P s/w in 64-dimensional binary space (Score:2, Interesting)
Every node takes a random 64-bit number as its address (collisions are possible but unlikely with 64 bits) and, once seeded, searches to position itself among its closest peers. The distance to another node is simply the number of differing bits in the address (plus a higher weighting of high bits in case of a tie).
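The distance metric described above is just a Hamming distance with a tie-breaker. A minimal sketch, assuming 64-bit integer addresses and reading "higher weighting of high bits" as comparing the raw XOR value:

```python
MASK = 0xFFFFFFFFFFFFFFFF  # 64-bit address space

def distance(a, b):
    """Primary key: number of differing bits (Hamming distance).
    Secondary key: the raw XOR value, which weights high-order bit
    differences more heavily - one plausible reading of the tie-break."""
    diff = (a ^ b) & MASK
    return (bin(diff).count("1"), diff)

def closest_peer(addr, peers):
    """Find the peer nearest to addr under the metric above."""
    return min(peers, key=lambda p: distance(addr, p))
```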
Upon this infrastructure, a group mechanism is implemented, where any member of a group stores all directory information for that group, and a list of known hosts for the identified content, as well as the peers for the next group.
Groups are hierarchically arranged, like a directory. Membership in one group mandates membership in all "higher" groups, up to the "root" group. Therefore, it is possible to navigate through the whole system like through a file directory tree.
Source code is available for the Macintosh (think "Hotline" without servers). It still has a minor memory leak, limiting stability to a couple of hours, and several other drawbacks that prevent it from becoming full featured, like not being able to reach peers behind NAT, and limited protection against malevolent nodes.
Ultimately, I stopped development because an IP-to-Content relation can be established and therefore the network is attackable-by-content. If anybody wants to pick it up and push the work ahead, the source (PowerPlant Net classes & UI) is up for grabs. Contact me at "komet163@gmx.net".
Thought this might interest someone in the context of this multidimensional network discussion...
Re:Prototype P2P s/w in 64-dimensional binary spac (Score:1)
Re:Prototype P2P s/w in 64-dimensional binary spac (Score:1)
Peers should seek out other peers based on quality, and better-connected peers should automatically move to the center of things. FastTrack works quite well, and it only has one layer of organization (regular vs. supernodes). It might be better to use nature as an example and have some simple set of rules peers would follow to naturally find the best spot for whatever they're doing (the old "ant" idea). You could factor in things like bandwidth, how many searches produced good results, how much overhead traffic is being generated, file types shared, etc. Peers would then keep or drop connections based on these factors, the goal being for each to find its ideal spot - not just for connectivity, but productivity. Peers with similar searches and files could naturally group together, making it easier to find the files you actually care about.
A simple approach to the IP address issue would be for each peer to generate a GUID and use this like a dynamic DNS name (similar to your 64bit idea but less chance of collision). You could then find another peer even when it re-connected under a different IP and more intelligent routing could be set up.
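A sketch of that GUID idea (the names and the central dict here are illustrative only; a real network would distribute the directory): each peer keeps one GUID for life and re-announces its current address under it.

```python
import uuid

directory = {}  # GUID -> last known "ip:port" (centralized here for brevity)

def make_guid():
    """128 random bits - collisions are effectively impossible."""
    return uuid.uuid4()

def announce(guid, address):
    """Peer announces its current address; overwrites on re-connect."""
    directory[guid] = address

def locate(guid):
    """Find a peer by GUID even after its IP address has changed."""
    return directory.get(guid)
```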
Trouble: this network topology requires authority (Score:5, Insightful)
This is because it assumes the peers are already arranged in the network in the topology one wants.
If a central addressing authority exists, it is no problem to simply give new peers addresses and the addresses of their neighbors in such a way that the network acquires any topology the authority wants. The authority can even cope with peers leaving the network more or less arbitrarily.
However, a real question is -- how do you get peers to "self-assemble" into the desired topology in such a way that a small population of peers that choose not to play by the generally accepted rules cannot dramatically affect the outcome? In other words, how can peers be persuaded to place themselves on the points of a cubic hyperlattice solely by contacting a few already installed peers, some of which may not be telling the truth?
Re:Trouble: this network topology requires authori (Score:3, Insightful)
Moreover, what are we to make of this?
The dominant constraint for hardware implementations of high-dimensional networks is the cost of the physical wires on the interconnect backplane. Since the hypernets discussed here would be implemented in software, no such constraints would prevent reaching the desired level of scalability.
I really don't see how you can sweep the actual physical infrastructure under the rug like this. Eventually, virtual hypercubes turn into real packets on a real network - a network that is subject to the very same topological limitations this article discusses. Any wonder that the Tandem Himalaya architecture he mentions was implemented in hardware, rather than as a "virtual" topology implemented on top of a traditional TCP/IP network?
When it comes to complicated mathematics like this, though, intuition has often led me astray...
Re:Trouble: this network topology requires authori (Score:3, Insightful)
It doesn't. The problem with the tree topology used by Gnutella is that it utilizes the available bandwidth in a fashion that becomes highly inefficient as the number of nodes becomes large. At around 10^6 nodes, it uses 15-20% of the total available capacity, whereas a hypercube topology at 10^6 nodes uses essentially 100% of the available bandwidth.
So obviously, if your application runs up against the physical capacity of your underlying communications systems, you can't send more data. But via correct choices of virtual network topology, you can ensure that the physical capacity is being used productively.
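To put numbers on it: in a d-dimensional hypercube each node keeps d open connections and any packet needs at most d hops, so a network of about a million peers (2^20) needs only 20 connections per peer - the figure quoted in the summary. A quick sketch:

```python
import math

def hypercube_stats(n_nodes):
    """Degree and diameter of the smallest d-cube holding n_nodes peers."""
    d = math.ceil(math.log2(n_nodes))  # round up to the next power of two
    return {"dimension": d, "links_per_node": d, "max_hops": d}
```

For example, `hypercube_stats(10**6)` gives a dimension of 20: twenty open connections per peer, and at most twenty hops between any two of a million nodes.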
Re:Trouble: this network topology requires authori (Score:2)
Would you say that the performance of the system would depend on your choice of which virtual nodes are connected to which other virtual nodes? Or can the performance of a virtual hypercube topology be considered completely independently of the underlying physical network?
Re:Trouble: this network topology requires authori (Score:2)
The issues examined by this paper revolve around things like, how many copies of a given packet need to be created to deliver it, how many nodes must it traverse, and what subset of nodes and links is carrying a disproportionate amount of traffic. This last issue is why the hypercube comes out on top in this analysis -- traffic is perfectly distributed over all nodes and links.
What this analysis does not consider, though, are complications such as routing protocol overhead, and the mapping of virtual links onto physical ones, among other things. While you can consider a computer and its phone line as a unit, you really need to think about the fact that, if your network spans two continents, a large number of your virtual links are going to be sharing a single physical connection. But again, to first order you can neglect these effects.
Re:Trouble: this network topology requires authori (Score:5, Informative)
...to say the least. Besides the obvious algorithmic problems of establishing and/or maintaining such a topology in an environment where nodes enter and leave at such a high rate, there's a serious overhead issue. Any serious discussion of ad-hoc routing protocols (which is what this is) nowadays needs to include an analysis of the number of packets needed by the routing protocol itself, in addition to the efficiency with which "user" packets are routed. A network that always delivers user packets over an optimal path isn't really all that useful if 90% of the network's capacity is consumed by route updates. I was very disappointed to see that this particular paper attempts no such analysis of routing overhead; without it, the paper's conclusions must be regarded as highly suspect.
Re:Trouble: this network topology requires authori (Score:1)
For example, a node that newly connects to the network could jump from node to node until it finds a suitable place in the network (you should be able to easily work out this method for a binary tree). For efficiency, it could even ask the network to give it a free spot. Nodes should then jump inwards to keep the network compact.
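For a binary tree, the jump-to-a-free-spot method might look like the following naive sketch (illustrative only; a real version would balance by descending into the smaller subtree):

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.left = None
        self.right = None

def join(root, newcomer):
    """Newcomer hops from node to node until it finds a vacant slot.
    Returns the number of hops taken."""
    cur = root
    hops = 0
    while True:
        if cur.left is None:
            cur.left = newcomer
            return hops
        if cur.right is None:
            cur.right = newcomer
            return hops
        # Both slots taken: hop onward (naively always left; a smarter
        # rule would follow the smaller subtree to keep the tree compact).
        cur = cur.left
        hops += 1
```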
Pounding a dead monkey (Score:2, Insightful)
Just get over it. Gnutella came, was good, and went. Now there are better things. Kazaa/FastTrack. Distributed Napster. Etc.
Stop trying to add new components to your VW Bug when there are Ferraris to be had.
Hopfrog.
Re:Pounding a dead monkey (Score:1)
Hypercubes and fat trees (Score:2)
Thinking Machines Corp went out of business around 1994.
Reinventing the wheel to be square? (Score:1)
Wrong. (Score:2)
Moving to a higher order of dimensions of course makes the maximum path length shorter; but it also makes the number of edges per vertex increase. Or, in other words, the number of simultaneous connections needed. Most GNet users have already pushed the number of simultaneous connections up to the maximum they can handle, thanks to bandwidth limitations, and are still experiencing the scaling problem.
I'm convinced the ultimate solution is a hybrid between Napster and Gnutella, with most end users connecting up like Napster clients, and a few volunteering to be index nodes, with a GNet-type organization between them.
Re:Wrong. (Score:2)
Hybrid Napster/Gnutellas: now legally impossible in the U.S., and shortly everywhere else in the world. Any volunteer index node owners would be in a world of legal pain, with their homes and future threatened. So sadly, not an option.
Umm... (Score:2)
Point two: Hybrid Napster/Gnutellas would be no more legally impossible than current peer-to-peer systems are now. Every node is involved in relaying search terms around, and is therefore technically involved in contributory copyright infringement. The only reason Napster went down was because its servers were fixed targets. Volunteer-run servers tied together using a Gnutella-like network could spring up and drop out as necessary, and the network would remain up.
All of us users just know it works (Score:2, Insightful)
"Overlooked" because it's a CROCK of CRAP (Score:4, Informative)
The Internet has a structure with physical limitations!
What good does it do if your many multiply redundant connections allow messages to be transmitted over fewer virtual hops, when every connection going out of your college dorm goes over the same physical wire? The number of connections over which a search request must travel is a liability, not an asset, when many of those connections happen to use the same physical wire. The author of this "paper" has conveniently ignored this fact, and his conclusion (that adding virtual links to your network allows you to manufacture bandwidth out of thin air) follows directly.
On a separate topic, the assertion that the virtual connectivity of Gnutella is anything like a Cayley tree is absurd, because it implies no closed paths. Consider: in order to discover and connect to a new host on the Gnutella network, you need to catch a search request originating from that host. The fact that you were able to receive that search request in the first place means that there was already a path between you and the remote host--therefore you have created at least one closed cycle by forming the new connection.
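That cycle argument can be checked mechanically: if a search request from host C reached you, C was already reachable, so a new direct connection closes a cycle - something an acyclic Cayley tree cannot contain. A small sketch:

```python
from collections import deque

def reachable(adj, src, dst):
    """BFS reachability in an undirected graph given as an adjacency dict."""
    seen, queue = {src}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return True
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False
```

If `reachable(adj, "A", "C")` already holds when A opens a direct link to C, the new edge necessarily creates a closed path, so the resulting graph is not a tree.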
Mod This Down (Re:"Overlooked" because it's...) (Score:1)
The author fully understands the nature of the internet. He is limiting the scope of his study to "theoretical bandwidth limitations" because that provides an upper bound on scalability regardless of the underlying hardware network structure. Yeah, okay, you won't actually hit that upper bound, but that's not the point. You can't possibly *exceed* the upper bound. That's the point. Earlier papers attempted to prove much lower upper bounds, and this is a (quite good) refutation of them.
Most posters are also confused about the relationship between hardware and software network topologies. Yes, the Internet connection going into your college or whatever has a fixed limited network topology. Yes, that will cause a 20-cube to have less bandwidth utilization than the theoretical limit. But guess what... the current logical tree structure that gnutella uses *also* does not match the underlying hardware structure, and so suffers in the same way.
By discussing the differences between the Cayley tree and the hypercube, we can do things like quantify the *rate* at which each of them suffers due to a bottleneck like this.
This is how science is done. People who ignore the science and just say "I am going to hack some super l33t heuristics into the network so searches work better" are being fools. Any heuristic you implement over a poorly-performing topology is going to work much better on the well-performing topology.
You can install Linux on a 486, or you can install it on a 1GHz Athlon. The slashdotters dissing this paper are essentially saying "Well it's stupid to install Linux on the Athlon, it won't go any faster because you know, those instruction cycle counts for the Athlon are all theoretical and you will never hit that performance in the real world... there are going to be cache misses and FPU stalls and stuff. So let's stay with the 486! Yeah, it's slow, but if we just optimize our apps, they will run faster!" Well you can optimize your apps and fool yourself into thinking you are doing a good job. But if you then turn around and run the newly optimized apps on the Athlon (or put the same amount of effort into optimizing them differently for the different CPU architecture) you will grind the 486 into a pile of rubble.
-N.
don't forget the physical sublayer (Score:1)
Re:"Overlooked" because it's a CROCK of CRAP (Score:3, Insightful)
To simplify: If the Gnutella network used a different node addressing (and associated routing) scheme, total aggregate bandwidth of the entire network would be increased, even as the network scaled upwards of a million nodes.
The underlying physical wiring is exactly the same for this method as for the existing Gnutella topology. That's the whole point. By simply changing the way Gnutella nodes connect to one another you can increase the aggregate network efficiency. It isn't manufacturing bandwidth out of thin air, it is making more efficient use of what is already there. Read over sections 4 and 5 again perhaps.
Finally, I have to admit to being somewhat boggled by your last paragraph. Are you suggesting that in order to become part of a network you must be part of the network already? Various routing protocols have solutions for how to dynamically insert a new node into a topology. STP, OSPF and EIGRP have differing methods, not particularly analogous to Gnutella, but useful nonetheless. A 'closed cycle' as you refer to it could also be thought of as a routing loop. It is possible to have multiple paths and not have a routing loop, something routing protocols are generally designed to prevent. What you're talking about here is that you have to have connectivity into a network in order to become a part of the network, which is somewhat obvious. I mean, it does generally work better if you plug it in.
Says nothing about Gnutella's scalability (Score:3, Insightful)
Until Gnutella starts directing searches intelligently - towards nodes which are more likely to have the data being sought, as Freenet does, it will always be an inefficient way to search for data.
huh? (Score:3, Interesting)
Ultimate p2p topology not HyperCube but TimeCube (Score:1)
This paper's math is flawed (Score:5, Insightful)
In layman's terms: if you are a node in one of his example networks, and you're sitting on a DSL connection, does the available bandwidth you contribute to the net change depending on whether you have 20 outbound TCP connections or 2? No. It is constant. The author incorrectly computes the "bandwidth" of these different network topologies as if you were stringing a separate DSL line to each person you open a TCP connection to.
Available network bandwidth in a peer network like Gnutella is related only to the physical interconnect of the nodes (i.e., whether they are on an SBC DSL line or sitting on a North American OC-3).
The only useful analysis is that which determines the amount of data-transfer required between each node (and all nodes) for common operations when using different topologies. When performing this study, you are looking for the topology which will transfer the smallest amount of data over the smallest number of nodes when performing searches. For a great analysis, see the Gnutella Performance Paper by Ritter [darkridge.com], referenced by the above paper.
Careful analysis will tell you the same thing that common sense does -- that the best architecture involves centralized dedicated servers (supernodes), located on machines with the largest physical bandwidth available (i.e., exactly what Napster did).
In order to create an efficient peer network which scales, Gnutella 'merely' needs to 1) order the network by physical topology, 2) identify the nodes with the best combination of physical bandwidth, longevity, and CPU/disk resources, and 3) fully utilize those machines as supernodes.
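Step 2 might be sketched as a simple scoring pass. The weights and field names below are invented for illustration; the post only names the criteria (bandwidth, longevity, CPU/disk resources):

```python
def supernode_score(peer):
    """Combine the criteria the post lists into one score.
    Weights are arbitrary placeholders, not tuned values."""
    return (0.5 * peer["bandwidth_kbps"] / 1000.0
            + 0.3 * peer["uptime_hours"]
            + 0.2 * peer["cpu_share"])

def pick_supernodes(peers, k):
    """Step 3: promote the k best-scoring nodes to supernodes."""
    return sorted(peers, key=supernode_score, reverse=True)[:k]
```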
Good luck. :)
Re:This paper's math is flawed (Score:2)
The problem is that it's not hops in the overlay network that matter; it's hops in the underlying IP network. Your "two or three hops" in the overlay network might actually involve a dozen 33K modem links in the physical network, whereas a "less optimal" five- or six-hop route in a better-constructed overlay network might involve only eight physical nodes and nothing less than a T1 between them.
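A toy illustration of that mismatch (the link counts are invented): the "shorter" overlay route can cross more physical links than a "longer" one.

```python
# Physical link count of each overlay edge (hypothetical numbers):
# A-B-C is two overlay hops over slow modem paths,
# A-D-E-F-C is four overlay hops over short, fast paths.
physical_links = {
    ("A", "B"): 6, ("B", "C"): 5,
    ("A", "D"): 2, ("D", "E"): 2,
    ("E", "F"): 2, ("F", "C"): 2,
}

def physical_cost(path):
    """Total physical links traversed by an overlay path."""
    return sum(physical_links[edge] for edge in zip(path, path[1:]))
```

Here the two-hop overlay route A-B-C crosses 11 physical links, while the "less optimal" four-hop route A-D-E-F-C crosses only 8.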
This mismatch between the overlay and physical networks is precisely what caused the famous Gnutella meltdown, as slow modem links saturated by search traffic effectively went down, leaving the network partitioned into a bunch of tiny little islands. This "hypernet" idea is topologically naive in almost exactly the same way, with predictable results. There's nothing in the proposal to prevent the creation of four-hop routes that make two complete trips around the world; such routes may appear "efficient" to a CEO or marketing guy, but to anyone who actually knows about networks it would clearly be otherwise.
Does anybody have a translation? (Score:2)
Also known as transit-stub (Score:1)
FastTrack, on the other hand, has a semi-hierarchical structure, so it uses less bandwidth, with those with the fastest connections doing the routing.
(Plug: View my Honours dissertation by clicking here [n3.net])
Re:Also known as transit-stub (Score:1)
All that aside, FastTrack has not changed at all over the past 8 months, whereas Gnutella has made huge strides. What's more, many of the clients (like LimeWire and Gnucleus) are open source, and Gnutella is an open protocol. We need open standards like this to keep the Internet free (as in free speech).
A bit harsh for dear Neil eh? :) (Score:2)
Neil J. Gunther (author of the aforementioned article)
"was born in Melbourne, Australia. He holds undergraduate degrees in Chemistry and Physics, a Masters Degree in Applied Mathematics (1976) from La Trobe University, Australia, and a Doctorate in Theoretical Physics (1980) from the University of Southampton, England."
Gore
"was born on March 31, 1948, and is the son of former U.S. Senator Albert Gore, Sr. and Pauline Gore. Raised in Carthage, Tennessee, and Washington, D.C., he received a degree in government with honors from Harvard University in 1969. After graduation, he volunteered for enlistment in the U.S. Army and served in Vietnam."
Re:A bit harsh for dear Neil eh? :) (Score:1)
Big deal. I hold a department chair at Old Latrobe University [snopes2.com]
Re:A bit harsh for dear Neil eh? :) (Score:2)
Samizdat filesharing protocol (Score:1, Interesting)
Samizdat is a censor resistant agile network protocol named for the clandestine publication of banned literature in the Soviet Union.
Some Americans have asked me if this is just about stealing music from big companies and giving it to your friends.
While this is possible, the real purpose of Samizdat is to share files and messages in a secure, anonymous network.
The right to Free Speech is a part of the Universal Declaration of Human Rights and is also in the constitution of the USA and the Human Rights acts of Canada, the UK, Australia and New Zealand.
Recently, the governments of the free world have tried to remove that right by claiming hacking is terrorism and information is a weapon.
I say "freedom is the freedom to say 2 + 2 = 4." Under the articles of the declaration (http://www.un.org/Overview/rights.html), laws like the USA PATRIOT Act, the UK RIP Act, and acts in other countries which allow the secret services or police to read your email are illegal under international law.
Articles 12, 19 & 27 of the universal declaration of human rights http://www.un.org/Overview/rights.html deal with the information rights of everyone.
If anyone tells you filesharing is illegal, then that law is a violation of an international treaty your country has signed to join the UN.
Re:Samizdat filesharing protocol (Score:1)
Re:Why not use Multicast? (Score:1)