

Closed Gnutella System to Prevent Bandwidth Hogs 251
prostoalex writes: "Salon.com is running a story on Gnutella developers contemplating the creation of a closed or authorization-only system to prevent bandwidth hogging. It turns out that numerous applications, including Xolox and QTraxMax, employ querying algorithms that are capable of bringing network traffic to a halt. While this gets better download speeds for the users of the aforementioned applications, the damage to network traffic as a whole is substantial."
Build a better system (Score:4, Insightful)
The solution is not authentication - it's building better network infrastructure.
Ozone! (Score:3, Interesting)
Ozone [ozone-o3.net] - Available for Linux, Windows, and OS X.
Beryllium's BeShare Server - use "Beryllium.BeShare.Com" inside of Ozone to check it out!
Enjoy
Re:Ozone! (Score:3, Interesting)
Totally indispensable when you have a tough coding problem and need instant coding help.
I rely on many friends from the BeOS Community to help me out, and I in turn
do the same for others.
It's what makes us a very friendly bunch, to be sure.
I only wish there were more features in Ozone, but it's open source now...
perhaps someone from the linux community will help us poor souls out?
(hint hint... nudge nudge... there's free chocolate in it for anyone up to the task... honest!)
Seriously though... the entire muscle/beshare system is TONS better than
anything I've ever used elsewhere when it comes to just working, and
connecting with a real community, instead of faceless creatures sucking your
bandwidth to get the latest Britney. (ugh)
Ozone. It's cool.
Muscle. It's even cooler.
You can find more information on Muscle here:
http://www.bebits.com/app/962 [bebits.com]
Definitely worth a read.
-Chris Simmons,
Avid BeOS User.
The BeOSJournal
http://www.beosjournal.org
Nice try Beryllium (Score:2)
Re:Ozone! (Score:2)
Anyone else (Score:3, Interesting)
Re:Anyone else (Score:2)
A few thoughts on P2P (Score:5, Interesting)
- the system must reorganise itself automatically based on current analysis of the nodes available on the network.
- the system must have a dynamic trust model, based on "paranoia".
- the trust model must be used in combination with other characteristics of each peer (node) to select the best population of nodes as the more important servants. Untrusted/neutral nodes are not to be given any crucial tasks. No one can do anything crucial alone; the action must be confirmed by other trusted nodes.
- All functionality of the network must be replicable automatically. Tasks done by any node must be transferable transparently.
- Weak nodes will not be given any "community work"
- Every node must pass constant quality criteria to be able to perform any actions on the actual network.
Just to mention a few points. In short, anarchy does not work - even in P2P networks. We need a government, but one which is always on the move, yet still governs the population using strict - but adaptive - rules.
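A minimal sketch of what the "servant selection" and quorum-confirmation ideas above might look like in Python; the fields, thresholds, and quorum size are all invented for illustration, since the post only describes the idea:

    # Hypothetical sketch: pick "servant" (supernode) candidates from known peers.
    # Field names and thresholds are illustrative, not from any spec.
    from dataclasses import dataclass

    @dataclass
    class Peer:
        address: str
        uptime_hours: float
        bandwidth_kbps: int
        trust: float  # -1.0 (distrusted) .. 1.0 (trusted), starts at 0.0 (neutral)

    def meets_quality_bar(p: Peer) -> bool:
        # Weak nodes get no "community work": require minimum capacity and non-negative trust.
        return p.bandwidth_kbps >= 256 and p.uptime_hours >= 12 and p.trust >= 0.0

    def pick_servants(peers: list[Peer], count: int) -> list[Peer]:
        # Only trusted, capable nodes are eligible; rank by trust, then capacity.
        eligible = [p for p in peers if meets_quality_bar(p) and p.trust > 0.5]
        eligible.sort(key=lambda p: (p.trust, p.bandwidth_kbps), reverse=True)
        return eligible[:count]

    def confirmed_by_quorum(votes: list[bool], quorum: int = 2) -> bool:
        # No single node decides anything crucial alone: require confirmation
        # from at least `quorum` other trusted nodes.
        return sum(votes) >= quorum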
Re:A few thoughts on P2P (Score:2)
Re:A few thoughts on P2P (Score:2)
Re:OT: Re:A few thoughts on P2P (Score:2)
I did. And in this case it means that we do have to have hierarchies to make the system work. However, I have to admit that it was a populist statement on my part, since you could see such a network as an anarchists' dreamland - a society in which everything is decided by functional voluntarism. However, that is not entirely what I meant in that e-mail; I believe there has to be some fixed hierarchy in it as well.
Re:OT: Re:A few thoughts on P2P (Score:3, Interesting)
Now, run along and play, or we'll have to airdrop you and Chomsky into downtown Gonaives [stratfor.com], and you two can try to explain Bakunin and Kropotkin to the natives, and why an absence of rule is a good thing.
Re:OT: Re:A few thoughts on P2P (Score:4, Funny)
Really. (Score:2)
Whether it's society backing up laws with collective delegated guns, or individuals backing up their own dictates with guns, it's still authority from the barrel of a gun.
Re:OT: Re:A few thoughts on P2P (Score:2)
It's when society itself organizes and delegates people to point guns at other people that anarchy is lost.
Pointing a gun at someone creates a hierarchical relationship and a loss of freedom for the individual staring down the barrel, i.e. not an anarchy.
Organising can certainly occur in an anarchy, and delegation is possible in certain forms.
Anarchism revisited (Score:2)
Anarchy in the sense of lack of a government is a different matter altogether. The brutal government repression in Zimbabwe etc. is an example of the exact opposite of that. Though actually, present-day Somalia is seen by many anarchists as a promising experiment.
Many thoughtful articles about Somalia, Iceland and other interesting societies here: Anarchy without chaos. [libertariannation.org]
This is mostly the anarcho-capitalist angle. Not sure where the Kropotkin people are on this.
Re:OT: Re:A few thoughts on P2P (Score:3, Interesting)
So, you need some kind of intelligence-gathering agency and military force that could detect and prevent a potential outbreak of government. And, of course, you'll need rules for these agencies to follow, so as to protect everyone else from them, and some sort of oversight committee to make sure those rules are followed. And then you'll need a group of people to handle the punishment of those who violate the rules, and another set of rules for them to follow to ensure that innocent people are not punished.
You'll also need some method of deterring people from lying, stealing, killing, or otherwise abusing each other. After all, most people aren't very nice. And then you'll need some way of seeing to it that those who do violate the rules of common decency are dealt with, and again, there will need to be a set of rules for how to proceed with such matters.
Wow, you were right, anarchy does work. All you have to do is follow these simple guidelines and...
Wait a minute. Oh shit! We've just created a government. Guess we'll have to start over.
Re:A few thoughts on P2P (Score:2)
Re:A few thoughts on P2P (Score:3, Interesting)
"good guys" are easily identified because they stay longer in the channel, thus gaining trust/fame (whatever you call it). But within an almost anonymous P2P-Network there is no central authority (like chanops in IRC who give +v to good guys). I'd really like to see some kind of web of trust in P2P, but making it unforgeable seems difficult to me. Perhaps some kind of micropaymentsystem: For each byte I download from you, I give you 1 digitally signed credit that raises your possibilities (like better search, skipping queues...) But then we need a central signing authority, otherwise people would do multiple accounts and gain lots of credits by "downloading" from their own machine.
The decentralisation of P2P makes it independent from central servers but at the same time it raises the ability to abuse the system.
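A rough sketch of such a signed credit, using Ed25519 keys from the Python `cryptography` package; the credit format is invented, and the single signing key illustrates exactly the central-authority problem noted above:

    # Sketch of a digitally signed download credit; the message format is made up.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    authority_key = Ed25519PrivateKey.generate()   # the central signing authority
    authority_pub = authority_key.public_key()

    def issue_credit(uploader_id: str, n_bytes: int) -> tuple[bytes, bytes]:
        # One credit per byte served, signed so the uploader cannot forge it.
        message = f"credit:{uploader_id}:{n_bytes}".encode()
        return message, authority_key.sign(message)

    def credit_is_valid(message: bytes, signature: bytes) -> bool:
        try:
            authority_pub.verify(signature, message)
            return True
        except InvalidSignature:
            return False

The weakness is visible right in the sketch: whoever holds `authority_key` is a single point of trust, which is what the decentralised design was trying to avoid.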
Re:A few thoughts on P2P (Score:2)
Well, if that central signing authority consisted of, say, 10 people (machines) in the beginning, they could certify others, who would then again share trust points, etc. But then the barrier for occasional users might make it impossible.
Re:A few thoughts on P2P (Score:2)
My Big Idea is similar to yours, only there is no global currency, just local ones.
Basically, every client contains a list of "good guys" stored locally on the box the client is installed on. If I download a song from you and the file is complete, properly encoded, etc., then my list marks you as a "good guy" and gives you one point. Conversely, if you are the RIAA and are putting up dummy files, then you get negative points.
You might be thinking that it'll take a long time to figure out who all the good guys and bad guys are. To speed up the process, have the blacklists and whitelists shared. Once someone gets high enough points on your list, your client asks that good guy for his lists and adds them to yours. You might want to prorate lists you receive from others depending on how many positive points they have, as sketched below.
This system isn't likely to be abused on a large scale, since those who abuse the system on a smaller scale will be ignored. There is no single point of failure, since the lists are kept by each individual client. The list of known abusers of the network will quickly propagate through the system.
A similar system could be used to guess people's preferences. Everyone could rate songs. People who rate songs similarly to you would receive positive points (note: a separate point system from the one discussed above). People who like Britney Spears will get negative points. Now, by getting the "recommended" lists from those with similar tastes in music to you, your client could actually recommend songs for you to download.
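A toy version of that local points list might look like the following; the thresholds and proration factor are arbitrary choices, not part of the poster's proposal:

    # Toy local reputation list: each client keeps its own scores, no central server.
    from collections import defaultdict

    class LocalReputation:
        def __init__(self):
            self.points = defaultdict(int)   # peer_id -> score kept only on this machine

        def record_download(self, peer_id: str, file_ok: bool):
            # Complete, properly encoded file: +1.  Dummy/decoy file: -1.
            self.points[peer_id] += 1 if file_ok else -1

        def merge_shared_list(self, from_peer: str, their_points: dict[str, int]):
            # Only accept lists from peers we already trust, and prorate what they say.
            if self.points[from_peer] < 5:
                return
            weight = min(self.points[from_peer] / 20.0, 1.0)
            for peer_id, score in their_points.items():
                self.points[peer_id] += int(score * weight)

        def is_good_guy(self, peer_id: str) -> bool:
            return self.points[peer_id] > 0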
Re:Emergence (Score:2)
BleeEEP - wrong
Get rid of pop culture vultures! (Score:5, Funny)
The biggest problem with gnutella is not technical. It is that gnutella was invented so that true hardcore underground people such as myself could complete our collections of hardcore underground things, such as the entire run of Evangelion. However, gnutella is cluttered with people only interested in Britney Spears. Here is an idea I first proposed on everything2 [everything2.com] for making gnutella less crowded.
Re:Get rid of pop culture vultures! (Score:2, Interesting)
Re:Get rid of pop culture vultures! (Score:2, Informative)
All it does is piss off dial-up users; it doesn't stop them, they just keep searching.
Salon's article on the practice [salon.com]
I think having an enforced standard for the Gnutella protocol is the sensible way to go. If you're going to design a protocol, do it properly and completely, which includes specifying exactly and clearly what a supernode is and how it should behave. If you don't clearly define every aspect of the protocol, then it is going to break down as people interpret it in different ways.
A protocol has to be a set of rules or it isn't a protocol by definition.
Re:Get rid of pop culture vultures! (Score:4, Insightful)
All it requires is for about 100 or so people to put a file in a shared directory called Brittneyspearsbarebreasts.jpg or something along those lines. But instead of said picture actually being of Miss Spears' bare breasts, why not make it something else... such as, possibly, goatse.cx?
What is interesting to me is that this would be EXACTLY what freeloaders would do if sharing was required. Just something to think about for people who think they have the freeloader issue figured out. It's a lot more difficult than it seems, since file names and file sizes say nothing about the quality of the content being shared.
Also, if current Gnutella clients were simply amended to have the option "don't allow people with 0 files in their library to download", how long would it be before a client was produced which falsely reported files in its library - files which don't exist and you can never download?
Chicken and egg if they make me share (Score:3, Interesting)
don't allow people with 0 files in their library to download
Then what about one file?
Besides, making the network trade-only leads to a chicken-and-egg problem for new users. How are "honest" users (the ones willing to share) supposed to get into the network in the first place? Where does a new network member get her first audio or video file?
Ripping my own collection works ... to a point (Score:2)
[To enter a network that requires users to share files,] download CDEX (it's free), insert CD, rip to MP3, share folder.
CDex + LAME --r3mix works for my collection of Eminem (who expressed approval of MP3 trading in the lyrics of "The Real Slim Shady"), Nine Inch Nails, Michael Jackson, and Weird Al Yankovic CDs, and recordings of songs that I write and perform, but then how do I get credits for downloading copies of music videos or Japanese animated television series? I don't have both a DVD-ROM drive and a plane ticket to Canada, so I can't rip my own.
Re:Get rid of pop culture vultures! (Score:2, Informative)
Re:Get rid of pop culture vultures! (Score:2)
Re:Get rid of pop culture vultures! (Score:2)
Everyone has just as much right to use the P2P network as you. They can search for whatever they want. Who are you to govern what is a proper use of the tool?
Re:Get rid of pop culture vultures! (Score:2)
Although I am glad to see that it somehow sparked a fight about the validity of Evangelion's ending. Perhaps we can also have a Rei vs. Asuka debate?
Self-policing network (Score:5, Insightful)
Nodes individually keep track of the behavior of their neighbors. Bad or expensive behavior like out-of-spec activity or excessive querying lowers the 'credit' of the node. Good behavior like answering queries increases a node's credit. Credit determines the probability that a node's queries will be answered or passed along and the priority with which they will be treated. Abusively written clients will eventually be ignored out of the network.
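As a rough Python sketch (scores and thresholds invented), that credit could translate directly into the probability that a neighbour's query gets passed along:

    # Sketch: neighbour credit decides whether we bother forwarding its queries.
    import random

    credit = {}  # neighbour address -> credit score, starts at 0

    def note_behaviour(neighbour: str, well_behaved: bool):
        # Answering queries earns credit; out-of-spec or excessive querying loses more.
        credit[neighbour] = credit.get(neighbour, 0) + (1 if well_behaved else -3)

    def should_forward_query(neighbour: str) -> bool:
        score = credit.get(neighbour, 0)
        if score <= -10:
            return False                         # effectively ignored out of the network
        probability = min(1.0, 0.5 + score / 20)  # better citizens get better service
        return random.random() < probability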
possible flaw? (Score:2, Interesting)
Bad or expensive behavior like out-of-spec activity or excessive querying lowers the 'credit' of the node. Good behavior like answering queries increases a node's credit.
How to write an "abusive" client that is still serviced by the rest of the network:
1. Create queries at the request of the user and send them. Re-query frequently to increase search results (a la Xolox) ["karma" decrease]
2. Respond to all queries with an affirmative "I have that file!" message ["karma" increase].
Abusively written clients will not eventually be ignored out of the network. Users of abusive clients will get better search results and clog other clients with false query hits in the process. In the long term, users will have to migrate to abusive clients to be able to get search results, thus crushing the network.
I may be wrong - I only have coding and protocol development experience with gnutella servents. Hopefully the good GNUNet developers have come up with an elegant solution to this problem, but it doesn't seem like it on the surface.
Re:possible flaw? (Score:2)
That's already a problem that's been dealt with by Gnutella clients -- you would have servents that would respond positive to any (and all) query requests (typically by appending '.html' to it and returning a redirect to some spam page).
The easy answer is that the newer clients have an option to send out queries for random data every so often -- anything that answers affirmative to those queries gets ignored. Simple and effective.
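That probe is easy to sketch: issue a query for a random string that should match nothing, and ignore any host that claims a hit (the bookkeeping below is illustrative, not taken from any actual client):

    # Sketch of the random-data probe: nothing real should match a random string.
    import random, string

    ignored_hosts = set()

    def make_probe_query() -> str:
        return ''.join(random.choices(string.ascii_lowercase, k=24))

    def handle_query_hit(host: str, query: str, probe_queries: set[str]):
        if query in probe_queries:
            # Any host answering a nonsense query is lying to attract downloads.
            ignored_hosts.add(host)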
Problems about UDP (Score:2, Informative)
I hope such problems are fixed now, but older clients will continue to eat my bandwidth. I don't want to make my ISP unhappy by letting lots of useless packets in.
An idea: UL/DL ratios (Score:3, Interesting)
So users won't be able to download without contributing to other users...
Re:An idea: UL/DL ratios (Score:4, Insightful)
Re:An idea: UL/DL ratios (Score:4, Insightful)
That's implicit in ratios, though. Ratios are - by definition - about quantity over quality. As you point out, imposing UL/DL ratios increases noise.
Re:An idea: UL/DL ratios (Score:2)
Quantity Ratio
and
Quality Ratio (which is determined by a person's grading of the downloads they have received from that server)
Re:An idea: UL/DL ratios (Score:2)
So people download loads of crud to reduce their quality DL.
Or they vote up the "quality rating" of stuff they've uploaded.
Of course, this is very client/server oriented, and doesn't translate well to P2P anyway.
Re:An idea: UL/DL ratios (Score:4, Interesting)
You don't have to upload files manually - all you have to do is share a specified amount of traffic before you can download more from other users.
Example: you want to download a 600MB file from other users. The admin server will check your account and verify the amount of traffic you are allowed to download. If you don't have enough traffic credit, you have to wait until somebody downloads something from you.
A good example is the eDonkey protocol: when downloading a big file you HAVE to share parts of it in order to finish the download.
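The bookkeeping behind such a scheme is simple; a bare-bones Python sketch (the 2:1 allowance and starter credit are arbitrary choices, not part of eDonkey or any proposal here):

    # Sketch of ratio accounting: you earn download allowance by uploading.
    class RatioAccount:
        def __init__(self, starter_credit_bytes: int = 50 * 1024 * 1024):
            self.uploaded = 0
            self.downloaded = 0
            self.starter = starter_credit_bytes   # so brand-new users aren't locked out

        def record_upload(self, n_bytes: int):
            self.uploaded += n_bytes

        def may_download(self, n_bytes: int) -> bool:
            # Allow 2 bytes down for every byte up, plus the small starter credit.
            allowance = self.starter + 2 * self.uploaded
            return self.downloaded + n_bytes <= allowance

        def record_download(self, n_bytes: int):
            self.downloaded += n_bytes

The starter credit is one way around the chicken-and-egg objection raised elsewhere in this discussion: new users get a little free allowance before the ratio kicks in.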
oh GOD. NO NO NO NO NO NO (Score:2)
You need good administration and tight surveillance of users to make that work as intended.
Neither of which are feasible or good ideas for something intended to be another network layer.
Re: An idea: UL/DL ratios (Score:2)
No. Download ratios are bad. There's no easy way for someone to get into such a "closed community" because, at the beginning, you just don't have interesting files to upload. You have two choices: upload lots of crap (and probably get kicked/banned) or be ethical, don't upload crap, and wait maybe weeks until your friend comes over and gives you CDs to upload.
Another possibility would be to visit other networks or BBSes (this is where this ratio stuff started) which don't have ratios, download stuff there and upload it to your ratio net.
But - if such networks exist, why use the ratio ones anyway? On the other hand, you would piss off those who are running the non-ratio net because you were just leeching like hell.
Download ratios actually hurt the whole community very seriously.
Elitist Bastards! (Score:2)
GNUNet (Score:3, Interesting)
P2P and DOS Attacks (Score:3, Informative)
Re:P2P and DOS Attacks (Score:2, Informative)
You're right, though. Most gnutella servent software out there doesn't behave very well.
Re:P2P and DOS Attacks (Score:2)
Re:P2P and DOS Attacks (Score:2)
When I used Windows, I never experienced such a thing with WinMX, but then again, it is much less decentralized.
The solution is to block abusive servents (Score:5, Interesting)
This means you, XoloX. As well as all the other servents which send requeries at ridiculously short intervals, send download requests tens of times per minute trying to force their way into a download slot, support downloading but not sharing, encourage or emphasize web downloading as opposed to participating in the Gnutella network, etc. Freeloaders are as much a problem as they ever were, but (IMO) only because they're being allowed to be such a problem.
The time has come when abusive servents need to be shown the door. I don't mind sharing most of the time. But when the same asshole is hammering me 100 times per minute trying to get a download slot, or sending the same query every 5 seconds trying to find more sources, my desire to share files goes down the toilet. Something needs to be done.
Re:The solution is to block abusive servents (Score:5, Interesting)
The Asia-based Qtraxmax developers see their mission as getting as many software (spyware?) installs as possible, through promising a superior user experience, and they would cheerfully destroy the network to do so.
Obviously, the solution is a new Gnutella option, defaulting to "on", that says "deny resources to abusive clients".
I'd rather see this as an option (Score:5, Insightful)
I think filtering of abusive apps should be done on the client side of the servent equation. The biggest problems I've seen lately don't involve Xolox specifically, but users of varying servents. People who queue up hundreds of different files to download at a time. People using programs which ignore "Not Shared" or "Refused" replies, and continue to pound my box looking for files that don't exist.
I was out of town for a few days last week (all computers turned off, except for my router box). When I came back, I fired up my Gnutella program. Without even connecting to the network, I was immediately serving uploads. That means that someone was trying to download from me for three full days while a) the files were not shared, b) Gnutella wasn't running, and c) the freaking computer wasn't even turned on! Come on, servent authors: pay some attention when you get "Refused" or "Not Shared" responses. Drop such files from the queue after 2 or 3 failed tries, don't leave them sitting there for eternity.
I want a setting that says "drop all packets from hosts who request a no-longer-shared file." I want a setting that says "drop all packets from hosts who attempt to download while the program is running but not connected to the network." I want a setting that says "drop all packets from hosts who send download requests more than $TIMES per minute." My per-user upload limit is set at 1, so someone queueing up 200 files at a time generates an enormous amount of protocol overhead. It might be 5 hours before that user gets all of his 200 files, all the while he's sending a constant barrage of packets which accomplish nothing.
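Those filters all reduce to per-host counters; a rough sketch of the idea (the limit stands in for the user-configurable $TIMES value, and none of this is from an actual servent):

    # Sketch of the "drop hosts that hammer me" filters described above.
    import time
    from collections import deque, defaultdict

    MAX_REQUESTS_PER_MINUTE = 10          # stands in for the $TIMES setting
    recent_requests = defaultdict(deque)  # host -> timestamps of recent requests
    blocked_hosts = set()

    def on_download_request(host: str, file_still_shared: bool, connected: bool) -> bool:
        if not file_still_shared or not connected:
            blocked_hosts.add(host)       # requesting unshared files, or we're offline
            return False
        stamps = recent_requests[host]
        now = time.monotonic()
        stamps.append(now)
        while stamps and now - stamps[0] > 60:
            stamps.popleft()              # keep only the last minute of requests
        if len(stamps) > MAX_REQUESTS_PER_MINUTE:
            blocked_hosts.add(host)
            return False
        return host not in blocked_hosts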
Gnutella is an open network. Yes, we do need to do something about read-only clients, but I think it should be up to the people to decide what gets done. Provide the users with the appropriate filters and let the majority determine what behavior is good vs. bad.
Shaun
Re:I'd rather see this as an option (Score:2)
I have a static IP. I haven't run a gnutella client for, oh about 2 months. I still get gnutella packets bouncing off my firewall at the rate of 4 or 5 a minute. That's insane...
Re: I'd rather see this as an option (Score:2)
How about an option in the protocol that transmits the "per-user limit" value on failed requests? How about clients that react on this value?
Of course, peers that send requests for the same files every few seconds should be blocked. This really hurts bandwidth.
In other news (Score:3, Funny)
You must be kidding (Score:2, Insightful)
While it gets better download speeds for the users of the aforementioned applications, the damage to network traffic as a whole is substantial.
Do you expect the same people who use the network predominantly for breaching copyright to care about the greater good?
silly (Score:4, Insightful)
Do you actually think the copyrights they're breaching have anything to do with the greater good?
Four companies have collectively monopolized music distribution, using copyright. Is this a good thing?
Get real. Record companies are scum. The artist would get more money if I mailed them a quarter, than if I bought the CD. Meanwhile, I would be giving the RIAA more money to keep it illegal to play legally purchased DVDs on my PC. I hope they all go bankrupt. Then we'll have competition.
I'll participate in a free market, but not the current abusive, short-sighted oligopoly. Tell me where I can legally download my 300 favorite CDs for a reasonable fee? I can't. Thankfully, record companies don't have a long-term business plan. They just keep trying to stifle new technology and get their business model legislated. They should be trying to provide the services people want. That's what they'd be doing in a free-market economy. Instead, they're trying to tell me what I want. They can bite me.
Re:silly (Score:2)
Never mind Gnutella, this just in.... (Score:3, Funny)
GNL (Score:5, Insightful)
One such proposal, GNL [shambala.net], was to provide a way to define alternate Gnutella networks from the main system, and include ways to limit their behavior. Another proposal, GNV [shambala.net], was a method for administering these networks, and said administration could be performed anonymously.
Many people liked my ideas, until I made the mistake of mentioning that the end result would probably be differentiation of Gnutella into several networks, each specializing in different types of files; it would be like making Gnutella into IRC, with separate server networks providing different flavors of service. I also mentioned that I thought the original Gnutellanet would wither on the vine. They looked on this with horror and dropped my suggestions.
*shrug* I dunno. Considering that, at the time, the Gnutellanet was scaling itself into bloated nonoperation, I thought splitting the Gnet into different specialty networks was a good idea. Clients could even log onto more than one Gnet at a time.
Those who do not learn from history... (Score:5, Interesting)
It's not like this hasn't happened before.
Sun did it with Ethernet. They set their NICs to use the minimum retry interval instead of minimum + random time like the spec says they must. This got better performance for Sun equipment. Right up to the time where someone put a dozen Suns on a single Ethernet segment and the competition between all of them hammered the network down to 10% of the expected bandwidth.
Various TCP/IP "accelerators" tried this too, by ignoring the exponential-backoff and slow-start parts of the TCP spec. They too improved speeds for the people who used them. Right up to the point where lots of people started to use them, when the competition between them hammered their transfer rates down to a fraction of what's expected.
We've seen it on UDP-based streaming protocols, where lack of flow-control mechanisms causes massive congestion problems and slower transfer rates than when flow-control is applied.
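The discipline the cheaters skipped is the same in all three cases: exponential backoff with a random component. A minimal sketch:

    # Sketch of exponential backoff with jitter, the behaviour the "accelerators" removed.
    import random, time

    def send_with_backoff(send_once, max_attempts: int = 8, base_delay: float = 0.5) -> bool:
        for attempt in range(max_attempts):
            if send_once():
                return True
            # Double the window each failure, then pick a random point inside it,
            # so colliding senders spread out instead of colliding again in lockstep.
            window = base_delay * (2 ** attempt)
            time.sleep(random.uniform(0, window))
        return False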
So why didn't anyone expect/predict this when they were designing the Gnutella network and protocols?
Re:Those who do not learn from history... (Score:3, Insightful)
After AOL stamped on the writer to remove the program, lots of people reverse engineered the protocol (which was almost trivially easy), and wrote their own clients. Because it was the time of dot-com mania, lots of commercial and semi commercial applications sprung up using the same protocol, without any of the authors ever bothering to consider whether the protocol was usable at all.
It's only now, about 3 years later, that we're finally seeing work to move 'Gnutella' into a more workable system (see the superpeer system of Gnucleus, for example).
It's called "The Tragedy of the Commons" (1833) (Score:5, Insightful)
The problem in general arises when you've set up a situation where if each user acted in both a rational and self-interested way, the system overall would collapse for all the users.
When designing any kind of multi-user system, it's critical to plan for the "what if all the users (or half of them) suddenly got very selfish?" case. What results are things like disk quotas: central-system-enforced limits on individual behavior.
In a system like the gnutella network, where there is no 'central system' to enforce 'community-minded' behavior, the eventual collapse of the system can be predicted as a function of overall population, presuming that there are always a few people who are more selfish than the rest.
Centralized systems like Napster actually had an advantage in that the centralized servers could establish and enforce 'fairness' policies that kept selfish users from triggering a 'Tragedy of the Commons'.
-Mark
Re:It's called "The Tragedy of the Commons" (1833) (Score:2)
Sounds logical, doesn't it? In fact it isn't necessarily so. Consider the internet: the IP infrastructure is in fact P2P. Let's apply what you said to it:
"In a system like the internet, where there is no central system to enforce community-minded behaviour; the eventual collapse of the system can be predicted as a function of overall population, presuming that there are always a few people who are more selfish than the rest."
Doesn't sound so obvious anymore does it?
Actually, this is an example of the iterated prisoner's dilemma; there is no known solution to that in the general case. It all depends critically on the details. However, I think Gnutella lacks some features that would have allowed it to weather situations that Kazaa seems to handle very much better.
There are always going to be some leeches. The point is to make sure that the leeches don't gain anything by abusing the mechanisms the network supplies - with Gnutella, and to some extent Kazaa, they do gain... and if they end up abusing it too much, the network dies.
Re:It's called "The Tragedy of the Commons" (1833) (Score:2)
And, in fact, we have seen exactly this kind of thing [slashdot.org] kicking in in certain parts of the Internet, like broadband service and pricing. AT&T has started separating out the 'leeches' ("heavy users") from average users, and applying negative feedback (higher prices) to their leeching behavior. Again, you can see how it takes a centralized administration (AT&T) to bring the system back into balance.
So you can either (1) hope that your system never becomes popular, or (2) hope that the density of leeches in your population never exceeds a certain 'thermal runaway' threshold, or (3) hope that the very worst leeching behavior doesn't substantially degrade service for everyone else, or you can (4) design the system so that at least one of those is true. Since popularity is desirable in a p2p system, and there are always some leeches, you need to design in limits to how much leeching one user can do -- an interesting problem in an open-source, p2p network.
-Mark
Re:It's called "The Tragedy of the Commons" (1833) (Score:2)
No no. AT&T are very able to control the bandwidth available to anyone on their network, lookup up 'traffic shaping'; it's interesting that they have chosen not to do this. Apart from a few crackers there are no leeches.
The real point is that most people who buy a broadband contract off them don't understand what they have just signed, so when congestion occurs, they start moaning. AT&T aren't going to go "well you shouldn't have signed the contract if you didn't understand it", so they've created this fictitious 'leech' guy who is supposedly stealing all the bandwidth. Then AT&T realised that they could actually make money for bandwidth they had already sold, by charging over a certain download limit- but it's just profiteering, there's no real issue, or atleast not if AT&T are running their network well.
I don't agree with your 4 'hopes'. These do not cover all the options you have in designing these networks. There's no hoping- you design it to have certain properties. If you write the software, you have central control anyway, in your terms. Every node in a P2P network can be a policeman if necessary.
Re:It's called "The Tragedy of the Commons" (1833) (Score:2)
Leeches aren't fictional, and AT&T already knows about traffic shaping. Problem is, traffic shaping throttles your peak or burst bandwidth. For people who don't leech or abuse their connection, it's nice to let them occasionally burst to higher bandwidths. If you apply traffic shaping they won't be able to burst even if it's only 1 time a month for a few tens of megabytes. The billing change AT&T's doing hits leeches for long-term average usage without chopping off bursts for non-abusers.
I like AT&T's approach. Do a single 10-megabyte upload a month, you get full burst rate. Run a file-sharing server transferring at a megabit a second 24x7, you get hit with a big bill and a warning to either curb your transfers or pay full-time for a dedicated chunk of bandwidth.
Re:It's called "The Tragedy of the Commons" (1833) (Score:2)
How many shares do you own? *snicker*
Like the previous poster said, ISPs who gouge their users (not "leeches") for using their unlimited connection are simply profiteering.
The SANE and FAIR thing to do is to use traffic shaping to severely limit the "hogs" rate during peak traffic times so the light users like grandma don't suffer. The more bandwidth you use over time, the less you get to use when it's scarce - but at 3am, even the hog should be able to use his full 2Mbps if it's not being used, because unused bandwidth doesn't cost the ISP anything.
--
Gnutella is the future of the Internet (Score:5, Interesting)
People need to realize that Gnutella is fast becoming a big player in the function and value of the Internet.
Gnutella, in my view (and many others), is not a mecca for porn, warez, and MP3's - but a pool where anyone can share any type of file.
A bigger trend now showing up is linking to files on the Gnutella network instead of the common http://site.com/file.zip. How does this benefit you? You get faster downloads by utilizing partial file sharing, swarm downloads, etc. It also benefits servers greatly. They are no longer the only source for the download, because once the file gets onto a Gnutella client, that client searches for more peers and shares the load with them. This can save TREMENDOUS bandwidth.
For example, a Linux ISO could be linked to like this: magnet:?xt=urn:sha1:(InsertSHA1)&dn=Linux&xs=http
(not an actual correct MAGNET link, but you get the idea)
When someone clicks that, it opens in a Gnutella client. The client begins downloading from that source and searching for the same file on the Gnutella network. Through the entire life of the download, it will continue to add sources. You could then be downloading from over 30 people at once, gaining speeds of up to 10 Mbps or more.
Oh, the power of Gnutella. Can KaZaA (FastTrack) do that?! (Well, it can, kind of.)
Oh, how do you know if that's the correct file? Hashing. Gnutella servents are implementing hashing now, where each file has its own hash. So when searching for files, they can swarm your downloads. You are GUARANTEED that all the sources you're downloading from are in fact offering the same file, because they have the same hash (SHA-1). That's what's getting the RIAA so scared.
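Computing the hash that goes into such a link is straightforward; a small sketch using Python's hashlib (Gnutella-style magnet links conventionally carry the SHA-1 in Base32, though details vary by client, and the filename below is hypothetical):

    # Sketch: compute a file's SHA-1 and build a magnet-style URN.
    import base64, hashlib

    def sha1_urn(path: str) -> str:
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                h.update(chunk)
        return "urn:sha1:" + base64.b32encode(h.digest()).decode()

    # Hypothetical local file, just to show the shape of the resulting link.
    print("magnet:?xt=" + sha1_urn("linux.iso") + "&dn=Linux")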
Also new on the scene (well, new as in newly popular) is Bitzi [bitzi.com]. Bitzi catalogs hashes (bitprints). You can search through their database and find files with hashes. Click a hash, and you can download the file. Each file on Bitzi has a "Bitzi Ticket" where you can rate the file. You can mark it "Invalid/Misleading", which means it is not the file you want. You can mark files if they contain viruses, too. I can almost hear the sweat dripping from the RIAA lawyers' foreheads.
Want to see the future of Gnutella? Check out Shareaza [shareaza.com] (WINE compatible).
Supports all of what I discussed in this post.
Re:Gnutella is the future of the Internet (Score:2)
It is entirely possible for two different files to have the same hash. SHA-1 produces a 160-bit signature. If you have 2^161 unique documents, you are guaranteed to have at least 2^160 duplicate hashes. Hashing algorithms are meant to detect malicious tampering with files, and random errors. They are not meant to guarantee the uniqueness of a file.
Of course, 2^161 is around 3x10^48, so the network won't have that many documents for a long, long time. However, the odds of finding a duplicate are much higher than most people would think, a la the birthday paradox (if you have a room of 23 people, there is over a 50% chance that two people share a birthday). Similarly, if you have a hashing algorithm that maps into one quadrillion unique numbers (50 bits), you need around 40 million documents before the chance of a duplicate exceeds 50% (and 110 million documents before it exceeds 99%). I'm not going to calculate it for 160 bits (with 2 billion documents, the odds of a duplicate are less than 1x10^-9, and I'd have to write a new program to go higher than that).
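The arithmetic above is easy to reproduce with the standard birthday approximation p ≈ 1 - exp(-n^2 / 2N); a quick sketch:

    # Sketch: birthday-paradox collision probability for n documents
    # hashed into a space of 2^hash_bits values.
    import math

    def collision_probability(n_documents: float, hash_bits: float) -> float:
        space = 2.0 ** hash_bits
        # 1 - exp(-n^2 / 2N), via expm1 so tiny probabilities don't round to zero.
        return -math.expm1(-(n_documents ** 2) / (2.0 * space))

    print(collision_probability(40e6, 50))    # ~0.5, matches the ~40 million figure
    print(collision_probability(110e6, 50))   # ~0.99, matches the second figure
    print(collision_probability(2e9, 160))    # ~1.4e-30 for SHA-1 with 2 billion files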
That's what's getting the RIAA so scared :P No longer can they infect files and make them the same file size/file name.
The RIAA can certainly claim that their file has the same size, name, and hash. You won't know for sure until you download the entire file and calculate its hash.
[Slightly OT] Peer-to-peer and web of trust (Score:4, Interesting)
The keyserver infrastructure is already there, and the apps (like GnuPG) are readily available cross-platform. So why can't p2p clients allow content to be signed, so that you can establish a web of trust as to whose content can and cannot be trusted? Downloading a signature of a file to check its validity would certainly help reduce the chance of downloading dodgy content. This should be especially useful as you tend to get groups of people who are all interested in the same sorts of files (anime, divx, certain bands, etc.), so you could imagine a good web forming fairly rapidly.
Making a valid OpenPGP key is a computationally intensive task, suggesting that few people would make thousands of them on the possibility they would be blacklisted. They also don't require any form of real identification, making them effectively anonymous. Also gaining a good trust metric would be an incentive to keep the same key, especially if downloading was restricted based on your trustability.
I can't think of any good reason that this couldn't be worked into an existing p2p network. Whether it would work in practice I have no idea. Anyone who knows more about this than me care to comment? Anyone done it already?
Re:[Slightly OT] Peer-to-peer and web of trust (Score:2)
The advogato trust metric [advogato.org] and slashdot's moderation system are the most prominent implementations that try to solve the problem of peer based trust. It clearly needs more research.
Making identity generation difficult (Score:2)
> would make thousands of them on the possibility they would be blacklisted. They also don't
> require any form of real identification, making them effectively anonymous. Also gaining a
> good trust metric would be an incentive to keep the same key, especially if downloading
> was restricted based on your trustability.
I did a project that concentrated essentially on what you say here -- making key (identity) generation difficult. It's easy to make RSA keys (for instance) quickly if you don't care about security (and also difficult to independently verify that the key is "valid"), but I give a way to provide a token along with the key that's independently verifiable and difficult to create. This token can also "grow" in strength over time. Check out the paper here:
http://www-2.cs.cmu.edu/~tom7/papers/peer.pdf
We don't talk much about creating a "web of trust" kind of thing, but do talk about "cold hard evidence" of cheating. The next step is to see what other kinds of misbehavior can be audited (and how someone can provide proof of infraction), for instance, sending out too many flood messages onto the network.
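For flavour only, a hashcash-style sketch of an identity token that is expensive to mint but cheap to verify; this is an illustration of the general idea, not the construction in the linked paper:

    # Hashcash-style sketch: make identity creation expensive by requiring a partial
    # hash collision over the public key. Not the paper's actual scheme.
    import hashlib
    from itertools import count

    def mint_token(public_key: bytes, difficulty_bits: int = 22) -> int:
        target = 1 << (256 - difficulty_bits)
        for nonce in count():
            digest = hashlib.sha256(public_key + nonce.to_bytes(8, 'big')).digest()
            if int.from_bytes(digest, 'big') < target:
                return nonce              # expensive to find...

    def token_is_valid(public_key: bytes, nonce: int, difficulty_bits: int = 22) -> bool:
        digest = hashlib.sha256(public_key + nonce.to_bytes(8, 'big')).digest()
        return int.from_bytes(digest, 'big') < (1 << (256 - difficulty_bits))  # ...cheap to check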
Why people close systems... (Score:2, Insightful)
If the cable/dsl providers were mostly selling symmetric rather than asymmetric services, I'd bet that those same users would be much less likely to restrict access. Furthermore, I think the providers are well aware of that, so don't expect symmetric service to become common anytime soon.
An obvious solution (Score:2, Interesting)
Unfortunately only Shareaza ( www.shareaza.com ), and, IIRC, Bearshare, have implemented file queueing. It's like giving out a paper ticket at the deli, instead of asking the person behind the counter every 5 seconds if they're ready for you, you can just ask them at normal intervals (60 sec default for shareaza), because your spot in line is guaranteed, and there's no real advantage in asking more often.
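The deli-ticket idea amounts to a FIFO queue whose position you can poll at a sane interval; a minimal Python sketch (the slot count and interval are arbitrary, not Shareaza's actual internals):

    # Sketch of deli-ticket upload queueing: your place in line is guaranteed,
    # so polling more often than the advertised interval buys you nothing.
    from collections import OrderedDict

    POLL_INTERVAL_SECONDS = 60   # like Shareaza's default

    class UploadQueue:
        def __init__(self, slots: int = 4):
            self.slots = slots
            self.queue = OrderedDict()   # host -> requested file, in arrival order

        def request(self, host: str, filename: str) -> int:
            self.queue.setdefault(host, filename)   # re-asking doesn't move you up
            return self.position(host)

        def position(self, host: str) -> int:
            return list(self.queue).index(host) + 1  # 1 = next in line

        def may_start_upload(self, host: str) -> bool:
            return self.position(host) <= self.slots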
Big, bad hash DB? (Score:3, Interesting)
Would this be feasible at all, do you think? It would be an additional p2p distributed network (we gotta make sure the DB is accurate and relatively synchronized, so we can't give direct, universal write access). I'm thinking that you open a socket to the server, and just keep sending requests as you search for files, and as you open files. This way, we would also be able to blacklist files we don't want distributed, blocking those from being returned by the initial search.
You think the RIAA guy monitoring this discussion just choked?
Read up a bit. (Score:2)
I *believe* it was called Bitzi.
Re:Read up a bit. (Score:2)
However, it seems to be built around a company. That is bad news. This sort of service should be based on peer-to-peer technology, and should not be owned by someone who can be sued. There are of course problems involved in maintaining such a database within a p2p network (collision management, etc).
Unrelated : If a law enforcement official finds a piece of kiddie pr0n, they could use such a service to find others with the same piece under a different name. On the flip side, the Chinese government would use the technology to track down dissidents who share subversive literature by renaming the files.
Databases aren't the solution... (Score:2)
One basic problem with relying on hashing for the identification of files is that a malicious user can still send you a file, telling you it has the right hash, and you won't be able to check until you receive the whole thing. (Or you won't be able to check at all if you download only part of the file from them!)
Where's the party? (Score:3, Funny)
[pause]
Now if only I could find out where those elitist bastards are hiding! :-)
What YOU can do to help out the community. (Score:2, Interesting)
This is the bare minimum you should be doing if you care about/use p2p networks. If you're not willing to do this, stop downloading. Seriously. If you want to do more, there's a lot to be done.
Need a link? Check here [gnucleus.net]. It's a great client if you're windows-bound, it's open source, and it has a lively discussion forum.
how do I share files on dial-up? (Score:2)
I want to help, but I've run into snags:
See a new client? Check it out.
I don't like blue screens, I don't like spyware, I don't know how to use CVS, and I don't have the second hard disk to hold a Linux installation. (My current hard disk already dual-boots winme and win2k, and FIPS can't shorten an NTFS partition.) Besides, some of the apps let a server administrator kick off any user who connects to the Internet at ISDN data rate or slower.
Share files.
I share as much as I am able, but if I share files, I will cut off the person downloading from me when I go offline. Because of how I connect to the Internet, whenever somebody else in the household wants to make a voice telephone call, I have to disconnect from the Internet.
Need a link? Check here [gnucleus.net].
Gnucleus is a Gnutella client. I've read rumors that the design of the Gnutella network is not very compatible with connections slower than 64 kbps, which unfortunately is the fastest connection that many users in many geographical areas can afford. To get a faster connection would require either upwards of $500 per month for a T1 or $200,000 to move house. Is it true that Gnucleus will not work well over dial-up?
GNUnet, Direct Connect, and Sonny Bono (Score:2)
I don't know of a p2p network that doesn't have a win32 client of some kind.
Somebody wrote comments in reply to this article, pleading for more testers of GNUnet and giFT, neither of which is "ready" enough to release Windows binaries, or even a source tarball that will compile properly under MinGW.
In a true p2p system, any user can kick any other user from their own server.
I was specifically referring to the policies of many Direct Connect hubs.
In other words, if someone has a bunch of files you want, download them one at a time.
I already do that, using software such as WinMX that supports a local queue.
[they'll] download when you're available [or if not] resume the file from other hosts.
And if I'm not available often (I only get 150 hours per month on my dial-up plan), then I feel like I'm cheating people who try to download rare stuff from me when I cut them off.
And what about recordings of my own performance? I'm a musician, but I suck at vocals so I just record instrumental music. How do I make those available on a P2P network? I can't use the "legit" solutions (Vivendi's MP3.com or Bertelsmann's New Napster) because they ask me to verify that nobody has already "taken" [baen.com] the melodies that I use in my compositions, and I don't know how to do that. Any pointers?
No. Using gnucleus on a modem is not a problem.
Is using gnucleus and WinMX on the same modem a problem?
Inherently N-P Incomplete Problem (Score:2, Insightful)
You want a system without a central authority that can be shut down, so you create a peer-to-peer system.
The peer-to-peer system pretends to be a virtual network over a real network using point-to-point links to establish proximity relationships between sets of peers, mostly ignoring physical proximity and bandwidth constraints.
In order to force the proximity issue and address the bandwidth scaling issues, you invent a concept of "super nodes", which end up being self-selected.
In order to get better performance for themselves, people play "the prisoner's dilemma", and rat everyone else out with clients that gang up on requests to ensure disproportionately favorable service.
In order to lock out these clients, you create a central authority, but try to make it decentralized (e.g. "karma", voting, self-regulation, etc.) to maintain the original design goals.
But there are too many strategies to use to attack this. The current "attacks" are taking the form of over-requesting to the point of denial of service... and these are people not intent on destroying the network.
Say you figure out a way to create forced altruism for requests... the node equivalent of the GPL on source code, when you can't enforce the GPL. The natural reaction will be to move on to the next "attack": the "bad guys" pretend they are multiple nodes by avoiding intersecting connectivity with peers, so that dual adjacency won't give them away, and let them be countered.
So you move to a different protocol for "super nodes"; you counter the next obvious attack ("pretend to be a super node") by locking down binaries ("blessed binaries").
But the next attack is to modify the kernel that is running the blessed binaries, and defeat the attack that way (a common "borg" attack on the "blessed binary" NetTrek clients).
Now take active attacks. "Automatic Karma" can deal with dummy files -- "poisoning"... at least until they start intermixing bad with good. But it can't deal with the other issues, without a client lock-down. At which point, you lose repudiability (original design goal out the window: legal attacks work again).
The only real way to deal with this is to define a new protocol that is not virtual point-to-point linked.
And that can be blocked at the routers, unless all other content moves to the same protocol, so it can't be discriminated against.
The only way you are going to be able to create a "blacknet" is to actually create a "blacknet".
-- Terry
Re:Inherently N-P Incomplete Problem (Score:4, Interesting)
Actually, you mostly don't want to ignore these constraints. The P2P should make use of closer servers (mostly, but not exclusively).
In order to get better performance for themselves, people play "the prisoner's dilemma", and rat everyone else out with clients that gang up on requests to ensure disproportionately favorable service.
I don't see that this is necessarily a real issue. After all the server that has the file you want can keep a queue of requestors, and serve it in strict first come, first served order. 'Take a ticket and sit down over there.' It works. Asking more than once doesn't get you anywhere; and may even get you lower down the list.
The only real way to deal with this is to define a new protocol that is not virtual point-to-point linked.
Unclear. Very unclear.
Now take active attacks. "Automatic Karma" can deal with dummy files -- "poisoning"... at least until they start intermixing bad with good.
Yes, but users can usually play files before they've finished and cryptographic hashing of file contents can preclude people spoofing files, even when downloaded from multiple servers simultaneously.
Re:Inherently N-P Incomplete Problem (Score:2, Informative)
Re:Inherently N-P Incomplete Problem (Score:2)
The GNUtella architecture is broken by design, for the goals it wants to achieve.
Lack of a choke-point, which was the real design goal for the system: "a napster that can't be shut down by a record company", means that you can't rely on voluntary compliance with social norms, particularly when one of the most effective attacks is non-compliance. Adding security adds non-repudiation, which adds back a legal hand-hold to act as a choke-point.
You're screwed if you enforce norms, and you're screwed if you don't.
The GNUnet architecture is somewhat similarly broken (in that it can be censored by router blocking), but it's at least a step in the right direction for solving that problem.
It's only if the Internet itself gets away from protocols subject to transparent proxying that end-to-end guarantees can be maintained. For that to happen, it has to be impossible to distinguish between traffic on the basis of content.
Any other approach, and the traffic will be able to be filtered through intentional failure to propagate.
The only way you can win is to make it too expensive: if it means shutting down the Internet for the RIAA to get its way, that will never happen, but anything short of that is probably doable. So you have to make it so they have to shut down the Internet to stop you.
I guess I'm saying that they are attacking the problem at the wrong level because it's tractable at the point they are trying to attack it... like looking for your contact lens under the streetlight instead of in the alley where you lost it, because the light's better.
Hence "Inherently N-P Incomplete".
-- Terry
Re:Inherently N-P Incomplete Problem (Score:2)
Lies? (Score:2, Interesting)
Why does he claim that Shareaza allows limitless numbers of supernodes? Shareaza DOES NOT support more than 10. You can enter any number in Shareaza options, but anything over 10 gets dropped.
Is he just misinformed on this issue? Or is he just jealous that Shareaza has a better app and he is losing market share to them?
If you want to improve bandwidth.... (Score:2)
I know there are settings that can be tuned, but most people don't touch them.
I access a web page, and it downloads it to my system.
I want to print the same page, and it downloads it again.
I want to save the same page, and again it downloads it.
And what of radio over the net?
I got dial-up at what is supposed to be 56k (Earthlink), but they only give me 28.8 at best.
And I believe I helped finance free cable boxes for other Earthlink customers.
So what's the deal... with this concern over bandwidth?
It seems pretty clear to me that my ISP might give me more bandwidth and speed if other things I have no control over were better dealt with; even spam accounts for more mail than I otherwise get.
Freeloading? That's Always How It Will Work (Score:3, Interesting)
The ratio of users who have useful, desirable files to share to users who do not will always be low, perhaps 1 to 10 or 1 to 100. This is because the "freeloaders" cannot and do not have files to share until they get them from someone else. They will continue to be non-sharing nodes until such time as the sharers with desirable files open up the portcullis.
The point of the system is filesharing: Why impose restrictions on its primary function? The way to stop "freeloading" is not to restrict downloads, but to *increase* them. The closer to the unachievable equilibrium we come, the less "freeloading" there will be.
Why not have a blocker on the clients? (Score:3, Interesting)
I am not sure exactly how the Gnutella protocol works, but if every valid client had this blocker, then these "super-nodes" would not be able to get any information in or out.
Basically, the idea would be that when one of the malicious nodes starts to send multiple queries to another node with this blocking code, the other node would determine whether or not the traffic is legit. If it is not legit, that node will be blocked. Eventually, a "fence" would be put up around the offending nodes, and the damage they cause would be limited to non-standard clients.
As well, it may be prudent to make the block last for a specific time period. Perhaps even add the ability to pass the offending node addresses to other clients so they block as well.
If the Gnutella protocol allows this, it would be the most effective way of preventing malicious clients, because as soon as they threaten the infrastructure, they are blocked off.
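Sketched out, that amounts to a block list whose entries expire and can be handed to neighbouring clients; the names and timeout below are invented, not part of the Gnutella protocol:

    # Sketch of a time-limited block list that can be shared with neighbours.
    import time

    BLOCK_SECONDS = 30 * 60
    blocked_until = {}   # offending node address -> expiry timestamp

    def block(node: str):
        blocked_until[node] = time.time() + BLOCK_SECONDS

    def is_blocked(node: str) -> bool:
        expiry = blocked_until.get(node)
        if expiry is None:
            return False
        if time.time() > expiry:
            del blocked_until[node]      # the fence comes down automatically
            return False
        return True

    def export_blocks() -> dict:
        # Entries we pass to neighbouring clients so they can fence the node off too.
        now = time.time()
        return {node: exp for node, exp in blocked_until.items() if exp > now}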
Opens them up for legal issues (Score:2, Informative)
Goodbye, Gnutella...
Yes, something has to be done to clean up the bandwidth, but I don't think THIS is it.
this may be a little paranoid, but... (Score:3, Interesting)
Who would be behind such an attack? There are many possibilities. The recording industry is definitely one of them. There could be others. Who knows?
The point is you should all be careful what you install on your computer or even download. Millions of people around the world know how to program at varying levels of control over many different kinds of computers with different purposes. It's like the Force - some use it for good, some don't. There's bound to be at least a couple who are going to write a full-fledged application that is really just one big worm.
Use a different kind of discovery mechanism (Score:2)
There are a number of alternative discovery mechanisms which do not suffer from these kinds of architectural problems.
For example, NeuroGrid [neurogrid.net] and alpine [cubicmetercrystal.com] both use social discovery and peer profiling to prevent bandwidth hogging or query spamming.
There are also hybrid networks [sourceforge.net] that use super peers, like the Kazaa and Grokster clients.
There is only so much you can do to improve a flooding broadcast architecture. Gnutella will always have some kind of bandwidth and query problems no matter how optimized the clients become.
One Little Problem (Score:3, Insightful)
BlackGriffen
Please, keep downloading as much as you can! (Score:2)
Re:What?! (Score:4, Insightful)
In actuality, gnutella doesn't parallel any serious anarchist philosophy that I have seen very well at all. Most such systems that I have seen proposed generally call for communities of people that work together for the benefit of the community and are run by a direct democracy rather than a representative democracy.
In fact, anarchism doesn't advocate a state of chaos or a lack of laws so much as a lack of hierarchy. It calls for elimination of the concept of "positions of power", where the laws of the land are decided directly by the people themselves and where no person is forced to live by those rules except as the voluntarily accepted price of living within a given community.
Gnutella, on the other hand, is more of a "free for all" - more of a "frontier", which isn't very anarchistic at all, as hierarchy is easily created on the frontier; all it takes is a small gang or some guns. Whoever has the most ability to wield deadly power is at the top of the hierarchy.
-Steve
Re:AI (Score:2)
> changing IP addresses.
And what's to prevent the RIAA from generating a zillion keys and flooding the network with crap?