Scientific American Article: Internet-Spanning OS
Hell O'World writes: "Interesting article on Scientific American outlining what they call an Internet-scale operating system (ISOS). 'The Internet-resource paradigm can increase the bounds of what is possible (such as higher speeds or larger data sets) for some applications, whereas for others it can lower the cost.'"
Hmmmm (Score:3, Funny)
Larger scale Seti@Home? (Score:2, Interesting)
I'll stick with the current setup (Score:2, Interesting)
Re:I'll stick with the current setup (Score:1)
Re:I'll stick with the current setup (Score:1)
Re:I'll stick with the current setup (Score:1)
Re:I'll stick with the current setup (Score:1)
If I had control over who used my resources, I am sure I would share some of them with some people (or entities, rather). I'd rather have SETI using my computer's power than someone who wants to watch a movie or play games, etc.
Also, if I buy a top-of-the-line computer, why should I spend more when others can go out and get cheap ones and use my computing power?
Re:I'll stick with the current setup (Score:1)
Yeah, same thing with the road! umm...
Re:I'll stick with the current setup (Score:1)
Re:I'll stick with the current setup (Score:2)
What you said included the idea that someone else using your computer's idle CPU cycles will reduce its lifetime. This is a rather foolish notion if you leave your computer on all the time anyhow, and so the analogy makes sense. If you do turn your computer on and off frequently in hopes of extending its lifetime, you might want to consider the argument that leaving the computer on full-time is less stressful to it than constantly flicking the power switch. I think the downsides of the two choices pretty much balance each other out.
Now, if you want compensation for your electricity for leaving it on all of the time, the article states that you're going to get some small monetary amount for all of that unused processing power.
Re:I'll stick with the current setup (Score:1)
Re:I'll stick with the current setup (Score:1)
So long as you have to opt-in to enable this, I don't see a problem letting others use my idle cpu time. It actually makes me happy when I see it happening. Mod me as freak-1, but personally, I'd love to see a seti@home/distributed.net type thing that would allow downloadable tasks so that the client would not be limited to just doing crypto or statistical analysis. Sure, the security would be a bitch. There would have to be a responsible group or something that would validate programs before releasing them to prevent virus mayhem, or worse. But, how nice would it be for researchers around the world if they could have cheap access to vast amounts of cpu time?
Or am I just high?
Efficacy (Score:1, Interesting)
Re:Efficacy (Score:1)
I don't mind using SETI@home or something similar; hell, I'm helping out...but I don't want to _have_ to share my resources with Joe Public. If I wanted that, I'd be a communist.
....and as I read once: "In capitalism, man exploits man; in communism, it's the other way around."
Is this really a good thing? (Score:2, Funny)
Nevermind.
Re:Is this really a good thing? (Score:1)
I actually initially read "spamming" in the header, and I wondered... isn't sendmail relatively platform-independent these days?
If it could be trusted to be used ethically, this is one GOOD application for an "auto-patch download" feature a la Win XP: being able to toggle off someone's open-relay crap... unfortunately, that kind of power opens up all kinds of sticky wormcans that I don't wanna think about right now.
Plus the number of places that need (for good reason) to sandbox test a patch/change before rolling into production...
-l
Re:Is this really a good thing? (Score:1)
Modem users beware... (Score:1, Interesting)
Eh, enough trolling. I seriously hope this isn't some pathetic
Re:Modem users beware... (Score:2, Informative)
Re:Modem users beware... (Score:2)
And how is that a GOOD thing?
If I need a gig of space, I throw out a gig of crap.
If I am out of crap, I can spend $50 on an extra hard drive. Or $0.20 on a CD-R.
The only way to make distributed storage appealing is to make it so vast that nothing I can reasonably buy will compare with it, and that seems unlikely. And if it DID happen, I'd need a fat pipe to match.
In the end, I want to keep my computer to myself, except for the http server I run.
2000 56k modems delivering data to 1 56k modem? (Score:2, Interesting)
I suppose that's great and all, but what if Mary is on a 56k modem? Doesn't really help all that much. I do understand the point they're making though.
I love... (Score:1)
This is all very cute -- but some of it is laughable. The rest of it, decoding DNA sequences, sharing movies (the binary, not the decompressed video) -- it already exists.
In short: Big deal.
Re:I love... (Score:2)
Totally uncompressed video is FUCKING HUGE. Basically, imagine the size of a bitmap at the same resolution and bit depth as the video, then multiply that size by 30 times the number of seconds of video (for 30 fps video, which is pretty standard I think).
So she could decompress it, but if she then wanted to send it to this Finnish guy there would have to be a T3 or so between them...
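For the curious, here's that arithmetic in rough form, assuming 640x480 at 24 bits per pixel and a 90-minute movie (made-up but plausible figures):

    # back-of-the-envelope size of uncompressed video (assumed parameters)
    width, height = 640, 480              # assumed resolution
    bytes_per_pixel = 3                   # 24 bpp
    fps = 30
    seconds = 90 * 60                     # a 90-minute movie
    frame_bytes = width * height * bytes_per_pixel     # ~0.9 MB per frame
    print(frame_bytes * fps * 8 / 1e6)    # ~221 Mbit/s sustained
    print(frame_bytes * fps * seconds / 1e9)            # ~149 GB for the whole movie

(A T3 is roughly 45 Mbit/s, so the point stands: nothing a home user has comes close.)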
He was probably just watching some porno anyway.
Hmm...M$ license issues? Just a thought (Score:2, Interesting)
Kickstart
Seen before (Score:4, Funny)
Wow, 3 years on Slashdot and this is the first time I've caught a duplicate story before anyone else. What do I win? :) A free Kuro5hin.org account? :)
Re:Seen before (Score:1)
slashdot subscriptions (Score:2, Funny)
Re:slashdot subscriptions (Score:1, Flamebait)
Just like Enron paid for a president
And both have similar comments!! (Score:1)
This sig is a virus, take it and use it.
database or grep (Score:1)
Just have something that scans the posts for "a href" tags, puts each link in a list or table, and checks whether it already exists. Whenever there is a hit, someone should take a few seconds to see if it is a duplicate post...
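A minimal sketch of that in Python, assuming submissions are available as HTML strings and the list of previously seen URLs lives in a simple set (a real implementation would use the site's database):

    import re

    HREF_RE = re.compile(r'href="([^"]+)"', re.IGNORECASE)
    seen_urls = set()    # in practice, a table keyed by URL

    def flag_possible_dupe(story_html):
        # pull out every linked URL and report any we've linked before
        links = HREF_RE.findall(story_html)
        dupes = [url for url in links if url in seen_urls]
        seen_urls.update(links)
        return dupes     # non-empty means a human should spend a few seconds checking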
There are two types of people out there...... (Score:1)
Group 1>>Those who devote their spare CPU cycles to distributed projects like SETI@home.
Group 2>>Those who don't.
Now, the people in group 1 are already using something similar to an ISOS, only they are dedicating their computer to something they deem worthy--and I don't think a woman watching a movie in Helsinki is worthy.
Group 2 chooses not to devote their spare cycles for some reason. There are many reasons, but for some people, it is paranoia (about others' data on their computer). To take it a step further, to the ISOS--it's one thing to be looking at nekkid pix of your girlfriend on YOUR hard drive...but what if it was actually being stored on someone's computer in Orem, Utah (which raises some interesting questions about jurisdiction and local ordinances)...nekkid pix, mp3s, divx movies of Hilary Rosen, whatever....(of course, your mp3s of Metallica's music might partly be stored on Lars' computer or something....wouldn't that be a hoot)
Re:There are two types of people out there...... (Score:4, Interesting)
Privacy is very important but can certainly be worked out. For one thing, data could be stored in "bit stripes" so that each byte of your data is split into 8 separate streams but stored in more than 8 foreign hosts for redundancy and availability reasons. In that way no one could reconstruct any portion of your data from fragments on their drive and no laws could be broken by storing chains of bits.
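A minimal sketch of that bit-striping idea, assuming exactly 8 streams; the scheme and names are illustrative, not from the article (replication across extra hosts would happen when the streams are placed):

    def stripe(data):
        # bit j of every byte goes into stream j, so no single host ever
        # holds a reconstructable run of your data
        streams = [bytearray() for _ in range(8)]
        for byte in data:
            for j in range(8):
                streams[j].append((byte >> j) & 1)   # kept unpacked, one bit per byte, for clarity
        return [bytes(s) for s in streams]

    def unstripe(streams):
        # any complete set of the 8 streams rebuilds the original
        return bytes(
            sum((streams[j][i] & 1) << j for j in range(8))
            for i in range(len(streams[0]))
        )

    assert unstripe(stripe(b"some private data")) == b"some private data"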
Also private and public space could be partitioned off so that things you want kept on your system would stay there and only data associated with your weather predicting program would get stored on the ISOS. And quotas would need to be enforced so that if you donate 100GB to the ISOS storage then you may store, say 30GB (due to redundancy) in the distributed system yourself.
And perhaps your CPU's MIPS rating and uptime could be tracked to keep things fair. Then it would be almost like your computer storing up its processor cycles and getting them back all at once when you have a job to run. Grid computing makes sense and a World Wide Grid could make sense if it is feasible and the logistics could be worked out. Imagine everyone everywhere having the power of a supercomputer at their disposal.
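And a toy version of that accounting; the 3x redundancy factor is a guess, roughly consistent with the 100 GB donated / ~30 GB usable example above:

    REPLICATION = 3.0     # assumed redundancy factor

    def usable_quota_gb(donated_gb):
        return donated_gb / REPLICATION          # donate 100 GB, store ~33 GB yourself

    def banked_cycle_credit(mips, uptime_hours):
        # idle cycles accrue as credit, spent all at once when you have a job to run
        return mips * uptime_hours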
Re:There are two types of people out there...... (Score:1)
Oh my God!! You want to put thousands of computers to factoring a large prime?
Defrag (Score:1)
Re:Defrag (Score:2)
Internet Spamming OS? (Score:1, Redundant)
Re:Internet Spamming OS? (Score:2)
great idea (Score:1)
for starters, someone shoot the guy that said 'it's called windows.'
Anyhow, on a more realistic note, this is an excellent idea. I've often wondered why clustering is limited to computers owned by one individual or organization; why not a worldwide, scalable cluster? I guess the biggest concerns are security (who gets to see my data, who gets to copy my data, who can put data on my machine, who can execute code on my machine?). In a utopian society this would be easily resolved with trust. Fact is, if everyone uses the same setup, eventually someone will find a way to exploit it, so I foresee a lot of problems with designing a working, usable ISOS. However, there may be a simpler solution with similar if not the same results. Why start at the OS level? Why not a platform-independent application with a lightweight encryption algorithm? Redundancy would be a must (if someone kills their computer while it's working on your data there should be several backups to fail over to). Also, more importantly, selectivity about what processes, files, etc. get migrated, and which ones don't. I'm no developer, so I'm sure I've made many errors in this reply, but it's just my opinion; I'd love to hear others.
Blocsync
Sense of relief (Score:1)
Phew!
Re:Sense of relief (Score:2)
So instead of companies selling their email lists, (Score:1)
Re:So instead of companies selling their email lis (Score:2)
Sounds like Freenet II (Score:1, Informative)
Guess there is nothing new under the sun.
Re:Sounds like Freenet II (Score:2, Informative)
We've seen this... (Score:1)
--Dave Storrs
And what about the bandwidth? (Score:4, Insightful)
"Its disk contains, in addition to Mary's own files, encrypted fragments of thousands of other files. Occasionally one of these fragments is read and transmitted; it's part of a movie that someone is watching in Helsinki."
I wonder how upset this individual in Helsinki would be if Mary decided to format her hard disk in the midst of his movie... Oh, but you say that the same information is distributed on other workstations as a redundancy precaution. I wonder how much bandwidth that costs to guard against this 'just in case' scenario?
While I can certainly appreciate the added value of distributed processing power and multilocational data sources, exactly how is having these massive amounts of data running over the net affecting bandwidth availability?
In my opinion, the lack of a truly distributed ISOS is a bit trivial until we achieve a higher grade of internet connectivity for everyone!
Re:And what about the bandwidth? (Score:3, Interesting)
The Helsinki user is no worse off in this scenario than if Mary's machine were a web server.
We all know that such "just in cases" do actually occur. The only solution to data-loss is redundant copies of the data, maintained either manually (explicit backups) or automatically (transparent mirroring or replication). The authors' idea is to go for automatic replication, and once you have that you might as well use the replicas to improve performance by allowing them to serve data to nearby nodes. This can actually result in less overall bandwidth than traditional approaches, because each node is going somewhere relatively close to get data instead of choking up a central server.
That actually highlights a flaw in the example as given in the article. It would be quite abnormal for someone in Helsinki to be going half-way around the world to get the data, because there should be a nearer replica. It would be more accurate, though perhaps less compelling, to say that Mary's machine was being used as a "staging area" for other local users watching the same movie from Helsinki that Mary just watched ten minutes ago. That would IMO convey the idea of an ISOS (actually the data-store part of it) actually reducing network bandwidth while also improving robustness.
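A sketch of the replica-selection step that makes this work, with made-up host names and latencies; a real system would also weigh load and link capacity:

    def choose_replica(replica_hosts, rtt_ms):
        # fetch each block from whichever replica looks cheapest to reach
        return min(replica_hosts, key=lambda host: rtt_ms[host])

    rtt_ms = {"marys-pc": 180, "helsinki-cache": 4}   # illustrative figures
    print(choose_replica(["marys-pc", "helsinki-cache"], rtt_ms))   # -> "helsinki-cache"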
You're talking about FreeNet. (Score:2)
Replication of data has tremendous cost: bandwidth, time, and storage space. Its retrieval is also non-trivial. Local data is by far more manageable and secure, so much so that a fully distributed system just doesn't make sense. What does make sense is that people would prefer to carry their data with them.
Consider instead, a bootable business card CD burned with your favorite OS, and a key-sized multi-gig USB memory drive. Constrained to something that will fit in your pocket very comfortably, or even in a normal sized wallet, you can have everything the way you want it, anywhere you go. No need to add the complexity of network distribution at all.
Too often, visionaries put faith in a silver bullet to cure all ails. I prefer simple solutions to solve individual problems effectively.
Re:You're talking about FreeNet. (Score:2)
No, I am most emphatically not talking about Freenet. For one thing Freenet is not a filesystem. I can't mount it, I can't use plain old read/write (have to use Freenet-specific APIs), I can't use memory mapping (or even access individual blocks without reading the whole file), I can't execute images off it, there are no directories, no permissions, no data consistency. It flunks just about every test of whether something is a filesystem. Worse, Freenet drops data that's not being actively requested; that's OK for what Freenet was designed to do, but totally unacceptable for a filesystem. Got it? Good. Now we can move on.
Replication also has tremendous benefits, most notably robustness and performance. I alluded to the latter in my last post. If nodes are smart enough to get data from the nearest replica, then total bandwidth use goes down. The more replicas there are, the fewer network resources each replica-served request will consume (unless somebody's so stupid that they put all the replicas right next to one another). It's the same principle used by FTP mirrors and content-distribution networks, and it works.
...until you, or you plus multiple other people, need to access that same data from multiple places - perhaps concurrently. Then you get right into the same sort of replication/consistency mess you were trying to avoid, except that instead of having the attendant problems solved once for everyone using the filesystem each person has to solve it separately.
Actually I'd rather not have one more physical object to carry around, drop/damage/misplace, etc., or have to remember to copy the data I want for a business presentation onto my portable device. What I'd prefer would be that when I move to a new location connecting to the network also connects me to my files, wherever they may be, without unnecessary compromises in performance or security. Those of limited imagination might not believe it, but that will be possible quite soon.
Aside from the administrative-inconvenience issues noted above, where are you going to find such a device? How much will it cost, compared to the software-solution cost of zero dollars? How fast will it be? How reliable? What will you do when it breaks and you didn't make a backup?
The complexity of network distribution should be hidden from the user anyway. The whole idea of a distributed data store is that the complexity is hidden in the system so that users' lives are simpler. What you're proposing is to "shield" users from complexity that they wouldn't see anyway, and leave them responsible for decisions (replication, data placement, backup) that the system should be handling for them. That's not a positive tradeoff.
So do non-visionaries, and the word you're looking for is "ills" not "ails". Your silver USB bullet doesn't solve anything.
Re:You're talking about FreeNet. (Score:2)
Show some class. Treat people you don't know with some measure of respect, particularly if you disagree with them.
Freenet is not a filesystem. I can't mount it, I can't use plain old read/write [...] there are no directories, no permissions, no data consistency
It's not a file system because the daemons haven't been written to make it appear so. You could write specific applications that talk directly with NFS, but nobody does it. You're wrong about the last three points, though. It does have encrypted shared private namespaces, where people would have to have your public key to read the files. That's rudimentary file permissions for read. You also cannot publish to that directory unless you use the private key, which is rudimentary file permissions for write. No data consistency? I'm not sure what you mean here; since it's checksummed and encrypted and passed around in pieces all over the place, it seems very self-consistent. Perhaps you should read up on it. Just because you don't have to supply a password and username doesn't mean there's no permissions. It's done the only way a truly P2P system can be done without a centralized authentication system. Anything else puts all your eggs in one basket. That single point of failure boots reliability out the window.
Replication also has tremendous benefits
Agreed. But only for certain types of data that can take advantage of it. How does it improve the file which is only used in one place, by one person, when sitting at a specific computer? It doesn't. Replication wastes resources in this case. Taking that choice away from users is a step in the wrong direction, then.
Again, agreed. However, there is an identifiable subset of data that needs this treatment. NFS and VPN handle this quite nicely. The hard part is setting a random machine up to access the files. Hence the bootable CD configured to do so.
The complexity of network distribution should be hidden from the user anyway. The whole idea of a distributed data store is that the complexity is hidden in the system so that users' lives are simpler.
Complexity exists and has resultant issues, whether the user directly interacts with it or not. Due to the distributed nature of a purely networked file system, it's always possible that a critical file is unavailable due to any number of errors along the way. So what use is a uniform filesystem where ALL files can be missing or available at the whim of a 3rd party? A blend of traditional storage with the ability to mount network-shared data is a much better fit.
where are you going to find such a device? How fast will it be? How reliable?
Don't you read
How much will it cost, compared to the software-solution cost of zero dollars?
Uh, nothing is free. There's a current bandwidth and time cost for retrieval that is quite high. Adding software cannot remove that burden--it can only mask the entropy in a system, not reduce it.
For what it's worth, people around here say 'ails'.
Re:You're talking about FreeNet. (Score:2)
Perhaps I was overly curt with you earlier. I just get really tired of hearing "you mean Freenet" any time distributed storage is discussed. How would you like it if somebody said "you mean Windows" every time you mentioned operating systems, no matter how un-Windows-like the proposed operating system was? How "classy" would you be in correcting such a statement? Would you, perhaps, call it horseshit [slashdot.org]?
Private namespaces are not the same as directories, and the rudimentary access control they offer is in no way comparable to the sorts of permissions that any legitimate filesystem on any modern general-purpose OS is expected to support.
"Consistency" (a.k.a. coherency) has a very specific and generally well-understood meaning in this context, which you should learn before you start spouting off about whether Freenet exhibits it. In a consistent system, if node A writes to a location and then node B reads it, B will (assuming no other writes in the interim) receive the value A wrote and not some older "stale" value. There are varying levels and types of consistency, representing different guarantees about the conditions under which the system guarantees that B will get current data, but Freenet does not ensure consistency according to even the loosest definitions.
You're apparently not considering the advantage of not losing data if that one computer fails. Some people would certainly consider that advantage to be considerable.
In any case, I don't think I ever said that all data should be placed in the distributed data store. In fact, I rather distinctly remember saying the exact opposite. Modern operating systems permit the use of multiple filesystem types concurrently, so there's nothing keeping you from keeping data local if you so choose.
$900/GB? And you're seriously comparing that to a software-only solution that might carry zero dollar cost? Do you really think your silver USB bullet is the ideal solution for everyone, i.e. that there aren't plenty of people who would be better served by the distributed-storage alternative?
Re:You're talking about FreeNet. (Score:2)
Private namespaces are not the same as directories, and the rudimentary access control they offer is in no way comparable to the sorts of permissions that any legitimate filesystem on any modern general-purpose OS is expected to support.
Then you're proposing this 'shared drive network of computers' have a central server. There's no alternative which offers authorization for users, which allows proper access controls. I don't deny that the permission model for FreeNet isn't exactly standard OS fare. I specifically made that distinction, in fact. But I absolutely defy you to come up with a pure P2P way to do it with identical security to modern OSes without a central authority. The article did not appear to be promoting someone running servers to authenticate users, so my assertion is entirely appropriate.
but Freenet does not ensure consistency according to even the loosest definitions.
FreeNet has (mostly) the same properties as a WORM drive's file system. Once written, data cannot be changed. Someone could very well write a file system driver that makes such access possible, and FreeNet would appear to the user similar to a CD-ROM drive that they can write to. Isn't ISO9660 a coherent model?
Good point about hardware crashes, by the way. I overlooked that. And the reason I brought up the USB drive is in the fairly near future, the prices will be very, very reasonable. I don't think any of the solutions we're discussing will be feasible immediately, so looking a few years in the future is appropriate for a basis of comparison. And still, software is not zero cost. Perhaps from the user's perspective, but it has great cost for the infrastructure.
Re:You're talking about FreeNet. (Score:2)
I really wish you'd stop telling me what I'm saying. I'm not talking about central servers now any more than I was talking about Freenet earlier.
"Pure" P2P? Identical security? That's commonly referred to as moving the goalposts [colorado.edu] and it should be beneath you. It's not necessary to describe any X that meets a standard to prove that Y does not.
You obviously don't think it's possible to reconcile strong access control with decentralization. That's fine, but don't you think it's a little disrespectful to assume that other people who've spent a lot more time than you studying the problem have given up too. You're basing your argument on an axiom that's not shared with your interlocutor, but then I guess it doesn't matter because it's a digression anyway.
Appropriate, but inadequate. Freenet's SSKs are still not equivalent to real directories, or real access control, no matter how much you bluster.
WORM drives don't drop data like Freenet does. They might reject new writes when they're full, but they don't toss some arbitrary piece of old data on the floor to make room.
False equivalence. You haven't shown that Freenet is in any way like a CD-ROM, and in fact it differs from CD-ROM in this particular regard. Two nodes attempting to read the same data from Freenet simultaneously might well get different data, if one finds a stale copy in someone's cache first and the other finds a fresh copy in another cache. That is not consistent/coherent behavior.
Again: Freenet is not a filesystem. Not only is it not implemented as one, but its very protocol does not support features expected of filesystems (some of which I haven't even gotten around to mentioning yet). Neither of these can change without Freenet becoming something totally different from what it is now, perhaps not without abandoning its central goals of strong anonymity and resistance to censorship. There's nothing wrong with Freenet not being a filesystem. Perhaps it's something better; certainly many people seem to see it that way. All it means is that when people are talking about filesystems they're not talking about Freenet and you shouldn't tell them that they are.
Re:You're talking about FreeNet. (Score:2)
You're the one who's persisting in a digression, Grasshopper. As I said earlier, it's not necessary to describe any X that meets a standard to prove that Y does not. Y is Freenet. Freenet does not meet semantic or integration standards to be considered a filesystem, and since I was talking about filesystems I was not talking about Freenet. All this other bullshit about whether it's possible for some other system to meet that standard is interesting but beside the point.
OK, Sparky, I'll spell it out just for the slowest child in the class. I do believe that strong access control can be implemented in a decentralized ("P2P" if you prefer buzzwords) system. I might be incorrect in that belief, and you're welcome to dispute that belief if you choose, but saying that it's not what I'm talking about is just a non-starter.
You're misunderstanding the result, which applies only to obfuscation. There are forms of authentication that are mathematically quite distinct from what that paper discusses.
Ahhh, imitation is the sincerest form of flattery. It's nice to see that you're familiarizing yourself with that list of fallacies. It'd be nicer still if you read it as a list of things to avoid, and not as a list of things to try in your next post.
Firstly, I didn't intend that sentence as a refutation of your argument but as an admonition regarding the same sort of "disrespect" you complained about earlier. It's just slightly hypocritical for you to demand respect for your five minutes of thought while showing none for others' years of study.
As it happens, distributed storage systems are my professional specialty, but I wasn't actually referring to myself. I was thinking more of people like those behind MNet (formerly Mojo Nation), CFS, SFS, OceanStore or Farsite, who all seem to share a belief that decentralization does not preclude strong authentication. They're the ones who've spent years thinking about the authentication angle (I personally have focused more on efficiency and coherency angles). It's your dismissiveness of their efforts, not my own, that I find offensive.
Reexamine how SSKs work, or DBRs, and I'm sure that even you can figure out how inconsistency can occur.
Not really. It might or might not be possible to reconcile strong access control with decentralization, but the prospects seem even gloomier for such a reconciliation with Freenet's anonymity. Similarly, Freenet's insertion and caching behaviors conflict at a pretty fundamental level with the levels of coherency expected of a filesystem. Of course, the entire implementation would have to change as well. My contention is that Freenet and filesystems differ in enough ways - and deep enough ways - that a "Freenet-based filesystem" would no longer resemble Freenet as we know it. Again, that doesn't mean there's anything wrong with Freenet. Certainly the Freenetistas would tell you - as they've told me many times - that filesystem-like behavior is not a goal for Freenet, and that's just fine.
Arguably indeed. Would anyone like some cake [colorado.edu]?
Untrue, but I'm not paid to teach idiots the basics on Slashdot. Do your own homework.
I really find credentialism quite distasteful, but since you seem so insistent on making your own appeals to authority I'll play along. As I mentioned, this is my professional specialty. I have a pretty well-documented record of keeping up with developments in this area, and engaging other "leading figures" in dialog as a peer. Your insinuation that I don't know the terrain is absurd, but might apply to yourself. What can you do to demonstrate that your statements here are based on more than three minutes of reading and one minute of thought? Are you really so sure you want to pursue the issue of background, or can we get back to the actual issues?
Re:And what about the bandwidth? (Score:2)
The ISOS as described in the article runs on top of a traditional operating system; the files you need to boot that traditional OS would still reside locally, as would your applications. It's only the data that would reside elsewhere, which really isn't that different from what happens today with NFS- or CIFS-based fileservers, from the user's perspective. The difference, supposedly, is that replacing the single NAS server with a fully distributed network results in a more robust system, and one that can scale beyond the local LAN to the whole Internet.
I think I read this article in OMNI (Score:2)
Has Scientific American become nothing but a speculative-fiction and PR site for political movements and corporations?
this is not feasable.. (Score:2)
Until the bandwidth/price ratio available for Internet connections grows significantly higher, there are only a few exceptional cases where the cost of data distribution is low enough to make Internet-distributed computation feasible.
The same applies to clustered storage, with the added problem of the latency of accessing such storage.
This is not, unfortunately, a tool for helping the average computer consumer. It may, however, be useful for SOME scientific computational problems (i.e., ones doing heavy analysis of easily partitionable data), but those are certainly in the minority.
Unfortunately, over any significant distance the speed of light imposes a minimum latency that soon halts the scalability of most problems on a widely distributed system. As computers get faster and storage gets larger, this point of diminishing returns arrives sooner.
Now if we throw in the legal aspects... Can you see the ISPs liking this? How about companies whose equipment is used without their knowledge? And who do we blame for the illegal pr0n being stored, unknown to the user, on their equipment?
We should not be trying to find ways of consuming bandwidth; it is going to become a more and more valuable resource as computers get more powerful, so instead we should be looking to minimise the bandwidth consumed for given services.
If computers were not still scaling at the rate they are, this might be a useful idea, but that won't happen for some time.
Remember when Scientific American was good? (Score:2)
Sorry for going off-topic, but I just have to grieve any time I see anything about my former favorite magazine. Before computers, walking around reading one of these was how you knew who the real geeks were. Where once you had Nobel Prize winning contributors writing articles that took a week to digest, now you have watered down fluff comparable to Discover or Newsweek. Next time you come across an issue printed before 1985, pick it up and learn something.
Re:Remember when Scientific American was good? (Score:2)
Mass acceptance (Score:2)
This quote sounds like it came straight out of an article about Linux. The only difference being that Linux is not restricted to the limited set of applications an ISOS is capable of running.
If Linux is struggling (up to this point) to get mass acceptance and use, I can't see an ISOS getting off the ground for a long time yet, if ever.
How to Earn that Karma! (Score:3, Funny)
(Yeah, it's a little off-topic. I'm sure the mods will see the funny in it.)
How to Earn that Karma! (Score:5, Funny)
(Yeah, it's a little off-topic. I'm sure the mods will see the funny in it.)
How to Earn that Karma! (Score:1, Funny)
(Yeah, it's a little off-topic. I'm sure the mods will see the funny in it.)
Re:How to Earn that Karma! (Score:2)
Re:How to Earn that Karma! (Score:2)
Re:How to Earn that Karma! (Score:1, Troll)
Even better is this comment [slashdot.org] that points to an even older paper [freenetproject.org], and then says:
"Guess there is nothing new under the sun."
Got that right...
Re:How to Earn that Karma! (Score:2, Funny)
> As other posters have pointed out this is a duplicate article. [slashdot.org] But hey, turn this repeat to your advantage! Go read the previous posting and repost all the +5 posts as your own, then watch the karma roll in!
Hey... didn't someone suggest that the last time we had a duplicate story?
Not practical (Score:3, Interesting)
The time of super-fast home PCs is not likely to last very long. The incoming
There is absolutely no reason for 'Mary' to have so much computing power since she doesn't need it. The only real limiting factor today is bandwidth which this article assumes anyway.
What is probably likely in the future though is a more distributed OS. One that is truly network transparent in every facet of operation. I believe there are some rumors floating around about MIT working on something to this effect...
Re:Not practical (Score:4, Interesting)
Can you back this up with any real facts? Today, for $500, you can own a bare-bones Athlon system, which 20 years ago was a supercomputer, minus a bit of memory.
Even after we hit the Fundamental Barrier, whenever that is, computers will continue to improve for a while due to architecture improvements and innovative designs (like 3-D chips, currently totally unnecessary but providing one road for expansion in the future).
It gets to the point where on the consumer level, in a very short period of time (specifically, *before*
Maybe YOU call a 4GHz Athlon II w/ 512MB of RAM and a 100GB hard drive a thin client, useless to Mary. I call it a dream come true. You have to postulate a Major Breakthrough within the next two-to-three years in display technology for the cost of the display not to swamp the cost of at least (more realistically) a GHz machine with 128 MB of (fast) memory. We'd probably know about it already. So, do you buy the $200 "thin client" that can't do anything on its own, or the $235 "I'd kill for this machine in 1985" that runs fifty times faster, and feels ten times more responsive?
(I made a couple of assumptions in this post. But one way or another, Mary needs a supercomputer in her home, either for use that looks like modern use, or to serve as the central server for the house. I, and many others, even among the computer non-savvy, will NOT farm my data out to a foreign entity!)
Re:Not practical (Score:2)
I'm more than happy with three 600 MHz PIIIs at the house. I've got a good deal of RAM (1 GB and 512 MB of PC133 SDRAM), some good video cards (GeForce 3s) and some ATA-100 cards with more than 100 GB of drive space. There is NO WAY that anyone would describe these boxes as cutting edge. Sure they're better than the average bear, but I don't see replacing anything on these machines for a looooong time.
Please remember I'm pretty damn geeky...these machines are more than capable of doing anything that I want them to do (uh, other than working through 8 million blocks from dnetc every second). They game incredibly well. The big one can easily handle 10 users as an Unreal Tournament server (while it still firewalls my network and acts as a mailserver for 6 domains and runs fetchmail for 5 accounts).
Sure, I'd love to do (WARNING: FreeBSDism ahead) a "make buildworld" in 2.76 minutes. I'd love to talk shit about the magnificent magnitude of my PCs at home. But I don't need to. I'm (depending on what component you look at) about four to eighteen months out-of-date on hardware. I still don't need to do any significant upgrades. The only upgrade that I might need to do in the next year is my video card, and that's not certain.
I'd love to upgrade the hardware...faster is always better...but I don't need to. I've had these boxes in their current incarnation for about a year. I still have absolutely no need to upgrade anything. Sure, I'd like to -- but I don't need to.
Hell, my wife has one of the very first G4s made (one of the "crappy" ones -- back when 128 MB of RAM was "a bunch")...the only time she'll need to upgrade is if the computer bursts into flames. My brother-in-law asked me about buying a computer -- I pointed him to the slowest P4 that Dell sold (he didn't need any more, and unless companies start making DNA analysis a requirement for registering software, he'll never need anything more).
As long as Joe-Schmoe-Home-User doesn't upgrade his software (and let's be honest...that rarely ever happens unless J-Random-Hacker forces the issue) he doesn't need to upgrade his hardware. I don't know about you, but I've only seen two pieces of software (not counting games) in the last year or three that were worth upgrading hardware for: Mac OS X and Windows 2000.
Intel could swoop down tomorrow with a 39.4 GHz MegaMonsterKickAssium. I wouldn't buy one. I'd think to myself, "Man, I wish I could afford one of those MegaMonsterKickAssiums. But, oh well, I don't really need one. Time to go home to the Pentium IIIs."
(Disclaimer: I'm talking about home PCs...I'm not talking about 3D rendering, real-time computing, massive scientific computing -- just 'average' home PCs).
Re:Not practical (Score:2)
Agreed! I've got an 800MHz machine, now over a factor of two behind state-of-the-art, rapidly coming up on three, and I have no desire to upgrade. (Weird feeling, that.) I also use a Pentium 233 and even a 133 laptop, day to day, and the 133 only sometimes bothers me.
But I'm not giving up the 800MHz...
Re:Not practical (Score:2)
With sufficient bandwidth, why should anyone _ever_ pay for cycles that they do not use? All you really need is a high-bandwidth connection and the computational equivalent of a TV, with a small reverse feed for input devices.
With the advent of set-top boxes, the age of the PC is coming to an end. It just isn't useful for the typical consumer. The only inhibiting factor today is bandwidth. The internet OS _assumes_ bandwidth availability though. That is its flaw. With proper bandwidth, there is no need for anything other than a glorified TV.
Re:Not practical (Score:2)
That's an argument for power (as in electricity) conservation, not cycle conservation. Use still tends to grow to match resources. Block off resource growth, on the excuse that it's unused, and you'll block off use growth, a.k.a. "innovation".
Normally one would want to consider the environmental side of increasing resources, but happily (and this is the great miracle of computers), there are no particular downsides (within reason) to increasing cycles. I still don't see the economic value in bastardizing our modern supercomputers, to save quite literally a couple of bucks, when the work could be done at home.
The edges still have vastly more power than the center, and that won't change. Ever. Virtually by definition.
I submit that only a naive analysis of the cost/benefits tradeoff can conclude that it's worth giving people "thin clients", and nothing else. If nothing else, do YOU want to be beholden to the Central Authority for what you can and can't do? Forget the moral issues, even. What if they don't install Quake VI when you want to use it? What if you want to install a software package that the Authority hasn't installed, for whatever reason? How shortsighted to give up the power to do these things, which even "mundanes" do all the time, just to save $15 on the client! (Throw in the personal freedom issues, and it's a *really* dumb idea.)
People still need their own processing centers inside their own homes. (They may choose to connect to that with OTHER thin clients, but there's still that central server, which is what Microsoft,
Price of power (Score:1)
Sounds Like Freenet II (Score:1, Redundant)
Guess there is nothing new under the sun.
Re:Sounds Like Freenet II (Score:2)
Apparently someone took seriously the suggestion of recycling the highly-moderated posts from the previous ISOS thread. The parent is an exact copy of this post [slashdot.org] by Ian Clarke on that thread.
BTW, the answer to the (implied) question in Ian's original paper is no. A useful "distributed decentralized data processing system" cannot be built on top of Freenet, or any other storage system that drops data as soon as the herd stops requesting it.
Interesting... (Score:1)
It's an interesting idea and handy in its own way, but taken to the extreme - would you want your system controlled by a central server, possibly owned either by government or by a consortium of some kind? And all of your files backed up somewhere else on the network, way out of your reach?
I am way too paranoid for this.
Also: Consider Mary's movie, being uploaded in fragments from perhaps 200 hosts. Each host may be a PC connected to the Internet by an antiquated 56k modem--far too slow to show a high-quality video--but combined they could deliver 10 megabits a second, better than a cable modem.
Doesn't this assume that Mary is not connected to the Internet by an antiquated modem? In which case, surely she can't download at 10 megabits a second either...
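The arithmetic behind both the article's figure and this objection, taking the 56k numbers at face value:

    hosts = 200
    uplink_kbit = 56                               # per-host modem uplink, per the article
    print(hosts * uplink_kbit / 1000)              # 11.2 Mbit/s available in aggregate
    mary_downlink_kbit = 56                        # ...but Mary receives at the speed of her own link
    print(min(hosts * uplink_kbit, mary_downlink_kbit))   # 56 kbit/s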
Whew! (Score:2)
That's one variant of NetBSD we DON'T need developed...
Re:Whew! (Score:1)
Is this really such a good idea? (Score:2)
The financial aspect of it is quite interesting, though: information and media could be "virtually free" because of your essentially leased-out idle computing resources.
Can you imagine... (Score:2)
Let's notate your Linux box as floodge(0), and ISOS as floodge(1). This higher-order OS would be floodge(2).
It gets better. Now consider an OS of order floodge(N), where N is an unimaginably large but finite number. This would harness the power of millions ** N computers! Truly outrageous horsepower; more teraflops than there are electrons in the universe. Just think of how many extra-terrestrial intelligences we could discover per second!
Loss (Score:1)
ISOS` (Score:1)
Yeah...sure... *coughDMCAcough*
I'm sure this would really fly. Plus, how secure can this really be?
Not to mention that the current internet infrastructure is not nearly fast enough to handle this.
"Extraordinary parallel data transmission is possible with the Internet resource pool. Consider Mary's movie, being uploaded in fragments from perhaps 200 hosts. Each host may be a PC connected to the Internet by an antiquated 56k modem--far too slow to show a high-quality video--but combined they could deliver 10 megabits a second, better than a cable modem."
Ok, but you're also effectively saturating 200 56k hosts... what if these people are downloading? Also...think of the unnecessary overhead of downloading from two hundred sources at once. I understand how this works.... similar to KaZaA, for example. You download fragments of a file from all over the place. You also see ten different versions of the same file, virus infected files, and inconsistent download speeds. One day you'll download a file at 100k/sec, the next you might be downloading it at 2k/sec. Also, does anyone else realize what havoc these p2p applications (which is really what this ISOS is) wreak on a filesystem? Do a weekend of downloading large files on any of the p2p networks and run a defrag analysis...you'll see exactly what I mean.
I can see this happening some time, just not soon by any stretch. The article does talk about the other use for this technology -- distributed processing. This is actually a viable option....but...newsflash...it's been around for a few years now. See SETI@HOME, Distributed.net, etc. These projects require little dependence on the unreliable internet. Well...that's not true...but they don't rely on massive amounts of data transfer per host. They rely on processing power, which is controlled by the client, for the client -- without relying on the internet.
Anyways, enough of a rant. I just think that the internet as it is now would not be able to take advantage of this technology.
-kwishot
possible if java-based transparency is realized (Score:1)
I looked hard for a Java-based client that could be employed in a distributed setting. I found ONE person working on this about a year ago, but it was not maintained. I would love to see Java code extended into a distributed.net client and then embedded inside certain web sites that support distributed.net.
For instance, you go to distributed.net and click 'contribute resources now'; bam, a Java client kicks in and you're crunching keys.
The main barrier to parallel acceptance is the ease of contribution. Many people don't want to install a client and configure it correctly. Java (even JavaScript) is now mature enough to handle parallelism inside the browser. Where is it?!
Java is not the fix (Score:2)
You're missing the real problem with all these distributed approaches. There aren't many corporate commercial computing jobs that are limited by compute speed. High-end server applications are usually most limited by disk I/O rates, which none of these ISOS approaches effectively address.
ISOS is great for compute-bound problems, OK for network-bound problems, and lousy for diskIO-bound problems, while the application portfolio willing to pay for speedup is overwhelmingly the reverse, except for a few scattered niches.
RPM speeds on disk drives don't improve at Moore's Law rates. The CPU isn't the bottleneck, the database is the bottleneck.
--LP
P.S. Also, writing parallel-efficient applications remains mostly "hard."
We don't want "The Network As A Computer" (Score:4, Insightful)
We don't want "The Network Is The Computer". Remember mainframes? Remember how we joyfully fled from them?
What we want is to really own our computer power.
We want a very clear sense of "This is my computer" and "This is my data". I can do what I like with it.
Think folks, what is all the fuss about security and file sharing? Ownership. This is my data to own (keep private) and my data to share (if I choose).
Complexity and installation difficulties steal our sense of ownership. When the computer is a burden, we don't want to own it. Complexity robs us of choice.
The correct fix is not an ISOS, or retreat to mainframe days. The correct fix is to simplify and make things easy.
I don't want my work computer to be my home computer. My employer and I definitely want a strong sense of separation on that front thank you.
Forget these silly pipe dreams, and concentrate on easing the pains of ownership so that we have strength to share.
All this is a silly confusion over....
Remove the confusion between the above items and the desire for silly things like "The Network Is The Computer", DMCA etc goes away.
Re:We don't want "The Network As A Computer" (Score:4, Insightful)
And remember what happened when the Internet came along? Everyone suddenly wanted to be part of a network of machines. Of course the Internet is a diverse set of services running on a diverse and redundant network of machines rather than dumb terminals attached to controlled and homogeneous hardware, so it's a great step forward from the days of mainframes. Nevertheless the Internet is very much a distributed computer system.
When I use Slashdot I am consuming resources on a remote computer. These days I probably use more CPU power and storage that lives out on the Net than lives on my machine. I don't know about you, but I love it. Much better than the days of standalone machines.
What has happened is we've moved from the days of monolithic, tightly controlled mainframes and terminals, through the personal computer revolution and on to a mixed peer-to-peer and client-server world that gives you the advantages of both approaches.
Of course there are issues, and security and control are amongst the biggest. But these can be solved ultimately, and I no more want to go back to standalone PCs than I want to go back to mainframes.
What we want is to really own our computer power.
Then disconnect your machine from the Net, and you will be happy. However don't presume to speak for the vast majority of computer users who seem extremely happy to be part of a large, distributed network of machines and systems.
Re:We don't want "The Network As A Computer" (Score:1)
In this system, each computer is effectively renting out space on other people's computers. When you need extra CPU cycles for a massive Bryce rendering you just created, the work can be distributed among multiple computers that have allowed your computer to rent their CPU cycles, in exchange for the future possibility of using your cycles when you're not using them. Believe it or not, your processor isn't at 100% utilization as you type messages to Slashdot.
We want a very clear sense of "This is my computer" and "This is my data". I can do what I like with it.
An ISOS wouldn't affect this any. The idea behind the ISOS is to pool the unallocated resources of the collective computers on a network. If the local machine needs a resource (long-term storage, memory, cycles) it can use its own resources without question. In the ISOS model, it can even get resources in excess of what it is capable of. Who wouldn't like to have 120 GB of storage when one only possesses a 60 GB hard drive? On the other end, suppose your drive is full of other people's data. If you need more space, just delete the other people's data. It won't affect them (thanks to the miracles of redundant distribution).
As far as data goes, nothing should change either. A particular user will be the only one who has the ability to access a specific piece of information. It's not like a user will be able to just browse other people's files that are stored on your computer. Before you cry out "I can't even look at the files on my computer!", hold that thought. Technically, you can look at the files, but since they're encrypted, you won't see much. And if this annoys you, you can just delete them.
What I said earlier isn't exactly true. I said that you could delete other users' file backup fragments, or that you could request CPU time, etc., implying that the user can do this. These are operations that should be handled by the ISOS. Suppose your hard drive is fully utilized, between local applications and other people's files. If you really need to store something locally, the shared space will automatically be shrunk, the excess returning to the local system.
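A toy model of that local-first policy, with illustrative names; the point is just that hosted fragments always yield to the local owner's needs:

    class LocalAgent:
        def __init__(self, disk_gb):
            self.disk_gb = disk_gb
            self.local_gb = 0.0     # your own files
            self.shared_gb = 0.0    # encrypted fragments of other people's files

        def host_fragment(self, size_gb):
            # accept foreign fragments only into genuinely free space
            if self.local_gb + self.shared_gb + size_gb <= self.disk_gb:
                self.shared_gb += size_gb
                return True
            return False            # the fragment is stored redundantly elsewhere anyway

        def store_local(self, size_gb):
            # the owner always wins: evict foreign fragments to make room
            overflow = self.local_gb + self.shared_gb + size_gb - self.disk_gb
            if overflow > 0:
                self.shared_gb = max(0.0, self.shared_gb - overflow)
            self.local_gb += size_gb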
I don't want my work computer to be my home computer. My employer and I definitely want a strong sense of separation on that front thank you.
Why is this separation necessary? Obviously, the hardware will exist in two separate areas. But other than that, how is it detrimental that the desktop "at work" be disconnected from the desktop "at home"? In the network created from ISOS, this idea of separation by use is irrelevant. Each computer is simply a resource user and supplier. Some computers might be specialized at doing one type of computation better than others, so it will get appropriate work.
In another scenario, the ISOS Resource Pool at your job could be completely separate from a global Resource Pool (internet). So each computer at work would share resources only with other computers at work.
I liken the ISOS to the idea of any public resource, like roads or parks, monuments, etc. The world would be much less friendly were you required to personally own everything you used.
An ISOS isn't about control of a single computer, it's about effective use of the aggregate resources that computers in general can provide. It's all about the resources. Your computer is simply a resource that can be used to accomplish something.
I personally challenge the view that one can "own" the resources of the computer. Most certainly I own my hardware, but can anyone own the ability to compute that is inherent in everything? But this is straying off topic. Perhaps in another discussion group...
Re:We don't want "The Network As A Computer" (Score:2)
Ah, but my 'net connection is....(and that has nothing to do with my typing speed....) I trawl the 'net for info.
I think the authors' mistake is calling it an OS. Reading the article more closely, it isn't an OS. It is more of a load-balancing, general-purpose RPC stub with several huge problems...
You haven't been watching all this fuss about "inappropriate use", have you? Have you forgotten Borland's shenanigans where the bosses raided the employees' email? No thanks; both the bosses and I want separation of work and private life.
Your computer is simply a resource that can be used to accomplish something.
Your head is simply a resource that can accomplish something. Can I borrow it for a while...? It's obviously not at 100% utilization :-)
Seen before (Score:1, Informative)
Wow, 3 years on Slashdot and this is the first time I've caught a duplicate story before anyone else. What do I win? :) A free Kuro5hin.org account? :)
woo (Score:2, Funny)
How about desk-sized? (Score:2)
We can't even share the resources of computers on the same desk efficiently, so why not start there?
Moral of the story (Score:1)
The insight one gains from reading the article is, of course, not that all developers should drop whatever they are doing and rush to develop The OS Which Will Cure All The Ills Of The World. Nor would it be possible for the desktop user to make any money in the manner described: if computing became so cheap, the cost of processing the transfer of money would far exceed the value of the computing time contributed.
The message is that P2P could indeed be the killer app for the desktop that Linux is waiting for; world domination is indeed possible if only we are a little more inventive. What OS is best equipped to support massively distributed computing? *nix, of course. Windows users already have a hard time protecting their machines from the internet. What we need is more robust P2P protocols designed with security and scalability in mind.
In the meantime, check out distributed.net
Great Leaps in Tech, ignoring other leaps? (Score:1)
Got ISOS, Need Killer-app (Score:1)
A few of us have been working on something extremely similar:
Jtrix [sourceforge.net]
Technically, Jtrix has micro-kernel-like agents (nodes) running on host machines. Applications consist of code fragments that can be executed on the nodes. There is a mechanism for binding to a remote service, and that's pretty much all you need as a basic platform. Of course, it's convenient to have some support services (eg. file storage), but that's already in userland (as it should be).
A lot of this is implemented and working.
We have got one problem though: we need a killer app to get people running those nodes.
Because no one has said it yet....... (Score:1)
no, not internet spanning..... (Score:2)
Re:repeat? (Score:4, Informative)
Re:Oops! It's already been done. (Score:1)
The controller should also know the locations of the projects' masters, so it acts as a data proxy unless otherwise allowed and can fetch newer versions of the cores in real time. Of course, the cores can just languish on disk until I sign them, should I choose to.
IOW, something like the now-abandoned distributed.net v3.
I'm wandering. Sorry.
_Knots
Re:Stealing from the poor, giving to the rich (Score:1)