News

Scientific American Article: Internet-Spanning OS

Hell O'World writes: "Interesting article on Scientific American outlining what they call an Internet-scale operating system (ISOS). 'The Internet-resource paradigm can increase the bounds of what is possible (such as higher speeds or larger data sets) for some applications, whereas for others it can lower the cost.'"
This discussion has been archived. No new comments can be posted.

  • Hmmmm (Score:3, Funny)

    by G-funk ( 22712 ) <josh@gfunk007.com> on Monday March 04, 2002 @12:14AM (#3103892) Homepage Journal
    Judging from the photo it seems to be a new form of 3d tetris.... This shall definitely shape the future!
  • This sounds like (among other things) a larger-scale Seti@Home project - sharing your unused cpu cycles to solve larger problems. I'm not sure how well this would be received, especially given the recent concerns over what these clients are actually transmitting.
  • I personally don't like the idea of my OS being spread across multiple machines, or other people being able to use my computing power. If I'm not using my computer, I don't want others using it, reducing its lifetime. I like knowing that everything I do is controlled by me, on my system. It's a little unnerving to think that my files would be distributed all around the world on other machines. (Can we say security?) No thanks, I'll stick with how I'm set up now.
    • Reducing its lifetime? What? Oh yeah, it's the same thing as being anti-exercise because "god gives me x number of heartbeats, I want to use them carefully..." Now don't get excited.
      • Uh, it's not the same thing at all. When I buy hardware, I buy it for ME to use. Not everyone else. Same thing with a car. Would you want people using your car when you're not, and have to still cover all the maintenance costs associated with it? Or buy all the gas for it? (Which is the equivalent of paying the electricity bill)
        • You ever heard of sharing... you are not using it, why not let someone else use it? The previous poster was only trying to make the point that it is not going to make a significant difference to the wear of your computer. It looks like you are just trying to justify being selfish.
          • I don't care how much or how little wear and tear. I don't like the idea of other people using my computer in any way. Also, think about it, if you used your computer 24 hours a day now, it could have a dramatic effect on the lifetime of your parts.

            If I had control over who used my resources, I am sure I would share some of it with some people (or entities rather). I'd rather have SETI using my computer's power rather than someone who wants to watch a movie or play games, etc.

            Also, if I buy a top of the line computer, why should I spend more when others can go out and get cheap ones and use my computing power?
        • I buy it for ME to use. Not everyone else. Same thing with a car.

          Yeah, same thing with the road! umm...

        • Uh, it's not the same thing at all. When I buy hardware, I buy it for ME to use. Not everyone else. Same thing with a car. Would you want people using your car when you're not, and have to still cover all the maintenance costs associated with it? Or buy all the gas for it? (Which is the equivalent of paying the electricity bill)

          What you said included the idea that someone else using your computer's idle CPU cycles will reduce its lifetime. This is a rather foolish notion if you leave your computer on all the time anyhow, and so the analogy makes sense. If you do turn your computer on and off frequently in hopes of extending its lifetime, you might want to consider the argument that leaving the computer on full-time is less stressful to it than constantly flicking the power switch. I think the downsides of the two choices pretty much balance each other out.

          Now, if you want compensation for your electricity for leaving it on all of the time, the article states that you're going to get some small monetary amount for all of that unused processing power.
    • Well, it would be great for the masses, but for those of us who USE our computers, I would stick with what we have as well... but with technology like this, it is we the innovators who would spearhead it, as with P2P, wireless networks, etc. Ahhhhhhh, the irony.
    • If I'm not using my computer, I don't want others using it...

      So long as you have to opt-in to enable this, I don't see a problem letting others use my idle cpu time. It actually makes me happy when I see it happening. Mod me as freak-1, but personally, I'd love to see a seti@home/distributed.net type thing that would allow downloadable tasks so that the client would not be limited to just doing crypto or statistical analysis. Sure, the security would be a bitch. There would have to be a responsible group or something that would validate programs before releasing them to prevent virus mayhem, or worse. But, how nice would it be for researchers around the world if they could have cheap access to vast amounts of cpu time?

      Or, am I just high.
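
      A minimal sketch of the "downloadable tasks" idea above, assuming a trusted validation group publishes a list of approved task digests; the URLs and file layout here are hypothetical, and exec() with stripped builtins is not a real sandbox, only a marker for where validation would happen:

        # Hypothetical work-unit client: only run task code whose SHA-256 digest
        # appears on an allowlist published by a trusted validation group.
        import hashlib
        import urllib.request

        ALLOWLIST_URL = "https://example.org/approved-task-digests.txt"  # hypothetical
        TASK_URL = "https://example.org/next-task.py"                    # hypothetical

        def fetch(url: str) -> bytes:
            with urllib.request.urlopen(url) as resp:
                return resp.read()

        def run_if_approved() -> None:
            approved = set(fetch(ALLOWLIST_URL).decode().split())
            task_code = fetch(TASK_URL)
            if hashlib.sha256(task_code).hexdigest() not in approved:
                raise RuntimeError("task not on the approved list; refusing to run")
            # NOT a real sandbox; a production client would need proper isolation.
            exec(compile(task_code, "<task>", "exec"), {"__builtins__": {}})

        if __name__ == "__main__":
            run_if_approved()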

  • Efficacy (Score:1, Interesting)

    by Digitalia ( 127982 )
    This would be most efficient if nations would universally implement a data network as broad and all-encompassing as the phone system. The state in question could offer access for free in exchange for cycles from users' computers, creating an enormous computer at federal/municipal disposal. Offer opt-out at a price, and it seems to me that this would be perfectly friendly to all.
    • yeah, and while we're in "Make-believe Land," can I have a pony? and about three Playboy Playmates?

      I don't mind using SetiAtHome or something similar, hell, I'm helping out... but I don't want to _have_ to share my resources with Joe Public. If I wanted that, I'd be a communist.

      ... and as I read once, "In capitalism, man exploits man; in communism, it's the other way around."

  • I mean I get enough junk e-mail as it is, without unethical crooks having an entire OS dedicated to the task...

    ...Oh, you said spanning OS.

    Nevermind.
    • >I mean I get enough junk e-mail as it is

      I actually initially read "spamming" in the header, and I wondered... isn't sendmail relatively platform-independent these days?

      If it could be trusted to be used ethically, this is one GOOD application for an "auto-patch download" feature a la Win XP: being able to toggle off someone's open relay crap... unfortunately, that kind of power opens up all kinds of sticky wormcans that I don't wanna think about right now.

      Plus the number of places that need (for good reason) to sandbox test a patch/change before rolling into production...

      -l
    • Ha! It took me until I read this comment to realize it wasn't an OS designed to deliver spam faster. Ha Ha.
  • by Indras ( 515472 )
    You think booting up your computer takes forever now, just wait until you have to download all the .dll's over a 28k line!

    Eh, enough trolling. I seriously hope this isn't some pathetic .NET rip-off, and that it works out alright.
    • by RyMon ( 547040 )
      People either aren't understanding, or aren't reading properly. Your whole computer wouldn't be spread out everywhere, only what you choose to put on the net. Your operating system, files, programs, etc. are still on your hard drive, but you can choose to sell extra space on your drive in exchange for some cash, and vice-versa. You can buy a gig of space spread out over the net to store some extra files on, and your files end up in tiny fragments on hundreds or thousands of other computers like yours.
      • You can buy a gig of space spread out over the net to store some extra files on, and your files end up in tiny fragments on hundreds or thousands of other computers like yours.

        And how is that a GOOD thing?

        If I need a gig of space, I throw out a gig of crap.

        If I am out of crap, I can spend $50 on an extra hard drive. Or $0.20 on a CD-R.

        The only way to make distributed storage appealing is to make it so vast that nothing I can reasonably buy will compare with it, and that seems unlikely. And if it DID happen, I'd need a fat pipe to match.

        In the end, I want to keep my computer to myself, except for the http server I run.
  • Extraordinary parallel data transmission is possible with the Internet resource pool. Consider Mary's movie, being uploaded in fragments from perhaps 200 hosts. Each host may be a PC connected to the Internet by an antiquated 56k modem--far too slow to show a high-quality video--but combined they could deliver 10 megabits a second, better than a cable modem.

    I suppose that's great and all, but what if Mary is on a 56k modem? Doesn't really help all that much. I do understand the point they're making though.
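
    A quick check of the arithmetic behind that claim (assuming each host's full 56 kbit/s upstream is actually usable, which is optimistic):

      # 200 modem hosts streaming fragments in parallel
      hosts = 200
      per_host_kbps = 56
      aggregate_mbps = hosts * per_host_kbps / 1000
      print(f"{aggregate_mbps:.1f} Mbit/s aggregate")  # 11.2 Mbit/s, roughly the 10 Mbit/s claimed

    As the comment notes, the aggregate only matters if Mary's own downlink can absorb it.
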
  • how they talk about Mary's computer decoding an mpeg for someone in Helsinki and sending it to her -- and then her telling her computer to stop (presumably, our Finnish friend just got screwed out of a key part of his movie.)

    This is all very cute -- but some of it is laughable. The rest of it, decoding DNA sequences, sharing movies (the binary, not the decompressed stream) -- it already exists.

    In short: Big deal.
    • Let's just talk about her decoding video and sending it to someone.

      Totally uncompressed video is FUCKING HUGE. Basically imagine the size of a bitmap at the same resolution and bit depth as the video, then multiply that size by 30*seconds of video (for 30 fps video, which is pretty standard I think).

      So she could decompress it, but if she wanted to send it to this Finnish guy there would have to be a T3 or so between them...

      He was probably just watching some porno anyway.
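
      For scale, here is the arithmetic from the comment above with some assumed parameters (640x480, 24-bit colour, 30 fps, a two-hour movie); even a T3 (~45 Mbit/s) would not carry the raw stream:

        # raw size = width * height * bytes_per_pixel * fps * seconds
        width, height, bytes_per_pixel = 640, 480, 3
        fps, seconds = 30, 2 * 60 * 60
        raw_bytes = width * height * bytes_per_pixel * fps * seconds
        print(f"{raw_bytes / 2**30:.0f} GiB total")                      # ~185 GiB
        print(f"{raw_bytes * 8 / seconds / 1e6:.0f} Mbit/s sustained")   # ~221 Mbit/s
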
  • With license v6 by M$, if you install it on your network, and run any other M$ product on that network (even back to Win 3.1), then the license is upgraded to v6 for all of those machines. Where is the boundary? If I do a VPN across the internet to another machine on another LAN, does that mysterious license switch occur? If I am globally connected to many machines on the internet, does the license switch occur on all of these machines?

    Kickstart
  • Seen before (Score:4, Funny)

    by SilentChris ( 452960 ) on Monday March 04, 2002 @12:21AM (#3103928) Homepage
    Uh, you mean like this OS [slashdot.org]?

    Wow, 3 years on Slashdot and this is the first time I've caught a duplicate story before anyone else. What do I win? :) A free Kuro5hin.org account? :)

  • Group 1>>People who already devote cycles to folding proteins, looking for E.T., or factoring large numbers.

    Group 2>>Those who don't.

    Now, for the people in group 1, they are already using something similar to an ISOS, only they are dedicating their computer to something they deem worthy--and I don't think a woman watching a movie in Helsinki is worthy..

    Group 2 chooses not to devote their spare cycles for some reason. There are many reasons, but for some people, it is paranoia (of others' data on their computer). To take it a step further, to the ISOS--it's one thing to be looking at nekkid pix of your girlfriend on YOUR hard drive... but what if it was actually being stored on someone's computer in Orem, Utah (which raises some interesting jurisdiction and local ordinance questions)... nekkid pix, mp3s, divx movies of Hilary Rosen, whatever... (of course, your mp3s of Metallica's music might partly be stored on Lars' computer or something... wouldn't that be a hoot)
    • by Radical Rad ( 138892 ) on Monday March 04, 2002 @01:47AM (#3104198) Homepage
      Devoting compute cycles to specific, worthy causes is great, but the point of an ISOS would be to make all connected hosts more powerful and efficient. If I want to factor a large number or predict the weather, I might have hundreds or maybe thousands of otherwise idle computers available to help with the task. So each processor is constantly busy.

      Privacy is very important but can certainly be worked out. For one thing, data could be stored in "bit stripes" so that each byte of your data is split into 8 separate streams but stored on more than 8 foreign hosts for redundancy and availability reasons. In that way no one could reconstruct any portion of your data from fragments on their drive, and no laws could be broken by storing chains of bits.

      Also private and public space could be partitioned off so that things you want kept on your system would stay there and only data associated with your weather predicting program would get stored on the ISOS. And quotas would need to be enforced so that if you donate 100GB to the ISOS storage then you may store, say 30GB (due to redundancy) in the distributed system yourself.

      And perhaps your CPU's MIPS rating and uptime could be tracked to keep things fair. Then it would be almost like your computer storing up its processor cycles and getting them back all at once when you have a job to run. Grid computing makes sense and a World Wide Grid could make sense if it is feasible and the logistics could be worked out. Imagine everyone everywhere having the power of a supercomputer at their disposal.
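
      A toy sketch of the bit-striping idea described above (redundancy, i.e. storing each stripe on several hosts, is left out for brevity):

        # Split each byte into 8 one-bit streams, so no single foreign host
        # ever holds a readable fragment of the original data.
        def stripe(data: bytes) -> list[list[int]]:
            streams = [[] for _ in range(8)]
            for byte in data:
                for i in range(8):
                    streams[i].append((byte >> i) & 1)
            return streams

        def unstripe(streams: list[list[int]]) -> bytes:
            return bytes(
                sum(streams[i][pos] << i for i in range(8))
                for pos in range(len(streams[0]))
            )

        original = b"weather model input"
        assert unstripe(stripe(original)) == original
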
  • Imagine how long it would take to defrag the whole Internet!

  • That's what I thought it said until the 3rd readthrough. Interestingly, the description holds either way.
  • well,
    for starters, someone shoot the guy that said 'it's called windows.'
    Anyhow, on a more realistic note, this is an excellent idea. I've often wondered why clustering is limited to computers owned by one individual or organization; why not a worldwide, scalable cluster? I guess the biggest concerns are security (who gets to see my data, who gets to copy my data, who can put data on my machine, who can execute code on my machine?). In a utopian society this would be easily resolved with trust. Fact is, if everyone uses the same setup, eventually someone will find a way to exploit it, so I foresee a lot of problems with designing a working, usable ISOS. However, there may be a simpler solution with similar if not the same results. Why start at the OS level? Why not a platform-independent application with a lightweight encryption algorithm? Redundancy would be a must (if someone kills their computer while it's working on your data there should be several backups to fail over to). Also, more importantly, selectivity of what processes, files, etc. get migrated, and what ones don't. I'm no developer, so I'm sure I've made many errors in this reply, but it's just my opinion; I'd love to hear others.

    Blocsync
  • I was reading the screen fast, without my glasses, and thought the title was "Internet Spamming OS".

    Phew!
  • they'll sell their customers' hard drive space and processor time? Let's say I give CompanyX the right to process gene sequences on my machine. What if they sell that right to other companies? A little freaky, but a good idea though.
  • by Anonymous Coward
    In the 1999 paper [freenetproject.org] "A Distributed Decentralized Information Storage and Retrieval System" which formed the basis for the Freenet [freenetproject.org] project, the following future direction is suggested:
    Generalisation of Adaptive Network for data processing
    A longer term and more ambitious goal would be to determine whether a distributed decentralised data processing system could be constructed using the information distribution Adaptive Network [Freenet] as a starting point. Such a development would allow the creation of a complete distributed decentralised computer

    Guess there is nothing new under the sun.

    • Heck, as far as simple file-sharing goes, it sounds like Freenet. (I realize the article is about an operating system, but this discussion seems to be mostly about sharing files.)

      As far as security goes, currently existing file-sharing programs allow you to *choose* which files you want to share, and which you don't, and other users can only request one of the files your computer says it has (no one has actual access to your hard drive*). I would imagine that any decent processor-sharing program would allow similar customization - setting the maximum amount of processor time used, how long the computer would have to be idle before it kicks in, temporarily disabling it, etc.

      As for wearing out your computer (as someone else mentioned earlier), come on! You'll probably upgrade the old components long before they burn out due to a bit of extra use. Most people run screen-saver programs that keep the processor busy during idle times, anyway. Might as well get some use from it.

      As for distributed operating systems, I agree with whoever said that bandwidth was a more important factor than the program itself. Until everyone is connected at 10 gigabits or so, distributed programs will probably only be used for large, slow things like number-crunching and file downloads - not OSes, which require an immediate response. Something to keep in mind for the future, though.

      *Barring potential glitches. Really, though, I haven't heard about many security problems in the current programs.
  • This was posted to /. maybe a couple of weeks ago (although I can't seem to find the reference).

    --Dave Storrs
  • by NOT-2-QUICK ( 114909 ) on Monday March 04, 2002 @12:27AM (#3103960) Homepage
    From the supposed real-life example in the article:

    "Its disk contains, in addition to Mary's own files, encrypted fragments of thousands of other files. Occasionally one of these fragments is read and transmitted; it's part of a movie that someone is watching in Helsinki."

    I wonder how upset this individual in Helsinki would be if Mary decided to format her hard disk in the midst of his movie... Oh, but you say that the same information is distributed on other workstations as a redundancy precaution. I wonder how much bandwidth that costs to prevent this 'just in case' scenario?

    While I can certainly appreciate the added value of distributed processing power and multilocational data sources, exactly how would having these massive amounts of data running over the net affect bandwidth availability?

    In my opinion, the lack of a truly distributed ISOS is a bit trivial until we achieve a higher grade of internet connectivity for everyone!
    • I wonder how upset this individual in Helsinki would be if Mary decided to format her hard disk in the midst of his movie

      The Helsinki user is no worse off in this scenario than if Mary's machine were a web server.

      I wonder how much bandwidth that costs to prevent this 'just in case' scenario?

      We all know that such "just in cases" do actually occur. The only solution to data-loss is redundant copies of the data, maintained either manually (explicit backups) or automatically (transparent mirroring or replication). The authors' idea is to go for automatic replication, and once you have that you might as well use the replicas to improve performance by allowing them to serve data to nearby nodes. This can actually result in less overall bandwidth than traditional approaches, because each node is going somewhere relatively close to get data instead of choking up a central server.

      That actually highlights a flaw in the example as given in the article. It would be quite abnormal for someone in Helsinki to be going half-way around the world to get the data, because there should be a nearer replica. It would be more accurate, though perhaps less compelling, to say that Mary's machine was being used as a "staging area" for other local users watching the same movie from Helsinki that Mary just watched ten minutes ago. That would IMO convey the idea of an ISOS (actually the data-store part of it) actually reducing network bandwidth while also improving robustness.
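
      A rough sketch of the "nearest replica" idea above: given several hosts holding the same fragment, probe each one and fetch from whichever answers fastest. The hostnames are placeholders and the probe is just a TCP connect time:

        import socket
        import time

        def rtt_ms(host: str, port: int = 80, timeout: float = 1.0) -> float:
            """One TCP connect round trip; +inf if unreachable."""
            start = time.monotonic()
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return (time.monotonic() - start) * 1000
            except OSError:
                return float("inf")

        def nearest_replica(replicas: list[str]) -> str:
            return min(replicas, key=rtt_ms)

        replicas = ["replica-helsinki.example", "replica-newyork.example", "replica-tokyo.example"]
        print("fetching fragment from", nearest_replica(replicas))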

      • The distributed file system thing is exactly what FreeNet already does. However, the key difference between local data and network data, which nobody seems willing to address fully, is what happens when the 'net' runs out of space. Some data gets replicated more than other data--typically by frequency of use--meaning data that's really really important to one person may not be available because too many people are watching Britney Spears movies, and they get replicated more, rather than the so-called important data.

        Replication of data has tremendous cost: bandwidth, time, and storage space. Its retrieval is also non-trivial. Local data is by far more manageable and secure, so much so that a fully distributed system just doesn't make sense. What does make sense is that people would prefer to carry their data with them.

        Consider instead, a bootable business card CD burned with your favorite OS, and a key-sized multi-gig USB memory drive. Constrained to something that will fit in your pocket very comfortably, or even in a normal sized wallet, you can have everything the way you want it, anywhere you go. No need to add the complexity of network distribution at all.

        Too often, visionaries put faith in a silver bullet to cure all ails. I prefer simple solutions to solve individual problems effectively.

        • No, I am most emphatically not talking about Freenet. For one thing Freenet is not a filesystem. I can't mount it, I can't use plain old read/write (have to use Freenet-specific APIs), I can't use memory mapping (or even access individual blocks without reading the whole file), I can't execute images off it, there are no directories, no permissions, no data consistency. It flunks just about every test of whether something is a filesystem. Worse, Freenet drops data that's not being actively requested; that's OK for what Freenet was designed to do, but totally unacceptable for a filesystem. Got it? Good. Now we can move on.

          Replication of data has tremendous cost: bandwidth, time, and storage space.

          Replication also has tremendous benefits, most notably robustness and performance. I alluded to the latter in my last post. If nodes are smart enough to get data from the nearest replica, then total bandwidth use goes down. The more replicas there are, the fewer network resources each replica-served request will consume (unless somebody's so stupid that they put all the replicas right next to one another). It's the same principle used by FTP mirrors and content-distribution networks, and it works.

          Local data is by far more manageable

          ...until you, or you plus multiple other people, need to access that same data from multiple places - perhaps concurrently. Then you get right into the same sort of replication/consistency mess you were trying to avoid, except that instead of having the attendant problems solved once for everyone using the filesystem each person has to solve it separately.

          What does make sense is that people would prefer to carry their data with them.

          Actually I'd rather not have one more physical object to carry around, drop/damage/misplace, etc., or have to remember to copy the data I want for a business presentation onto my portable device. What I'd prefer would be that when I move to a new location, connecting to the network also connects me to my files, wherever they may be, without unnecessary compromises in performance or security. Those of limited imagination might not believe it, but that will be possible quite soon.

          Consider instead...a key-sized multi-gig USB memory drive.

          Aside from the administrative-inconvenience issues noted above, where are you going to find such a device? How much will it cost, compared to the software-solution cost of zero dollars? How fast will it be? How reliable? What will you do when it breaks and you didn't make a backup?

          No need to add the complexity of network distribution at all.

          The complexity of network distribution should be hidden from the user anyway. The whole idea of a distributed data store is that the complexity is hidden in the system so that users' lives are simpler. What you're proposing is to "shield" users from complexity that they wouldn't see anyway, and leave them responsible for decisions (replication, data placement, backup) that the system should be handling for them. That's not a positive tradeoff.

          Too often, visionaries put faith in a silver bullet to cure all ails

          So do non-visionaries, and the word you're looking for is "ills" not "ails". Your silver USB bullet doesn't solve anything.

          • Got it? Good. Now we can move on.

            Show some class. Treat people you don't know with some measure of respect, particularly if you disagree with them.

            Freenet is not a filesystem. I can't mount it, I can't use plain old read/write [...] there are no directories, no permissions, no data consistency

            It's not a file system because the daemons haven't been written to make it appear so. You could write specific applications that talk directly with NFS, but nobody does. You're wrong about the last three points, though. It does have encrypted shared private namespaces, where people would have to have your public key to read the files. That's rudimentary file permissions for read. You also cannot publish to that directory unless you use the private key, which is rudimentary file permissions for write. No data consistency? I'm not sure what you mean here; since it's checksummed and encrypted and passed around in pieces all over the place, it seems very self-consistent. Perhaps you should read up on it. Just because you don't have to supply a password and username doesn't mean there are no permissions. It's done the only way a truly P2P system can be done without a centralized authentication system. Anything else puts all your eggs in one basket. That single point of failure boots reliability out the window.

            Replication also has tremendous benefits

            Agreed. But only for certain types of data that can take advantage of it. How does it improve the file which is only used in one place, by one person, when sitting at a specific computer? It doesn't. Replication wastes resources in this case. Taking that choice away from users is a step in the wrong direction, then.

            ...until you, or you plus multiple other people, need to access that same data from multiple places - perhaps concurrently.

            Again, agreed. However, there is an identifiable subset of data that needs this treatment. NFS and VPN handle this quite nicely. The hard part is setting a random machine up to access the files. Hence the bootable CD configured to do so.

            The complexity of network distribution should be hidden from the user anyway. The whole idea of a distributed data store is that the complexity is hidden in the system so that users' lives are simpler.

            Complexity exists and has resultant issues, whether the user directly interacts with it or not. Due to the distributed nature of a purely networked file system, it's always possible that a critical file is unavailable due to any number of errors along the way. So what use is a uniform filesystem where ALL files can be missing or available at the whim of a 3rd party? A blend of the traditional approach with the ability to mount network-shared data is a much better fit.

            where are you going to find such a device? How fast will it be? How reliable?

            Don't you read /.? USB drives in the 1GB range that are the size of a pocket key are available today, for about $900. Multi-gig ones will be along shortly, no doubt. They're memory sticks. Faster than hard drives, and being solid-state, more reliable.

            How much will it cost, compared to the software-solution cost of zero dollars?

            Uh, nothing is free. There's a current bandwidth and time cost for retrieval that is quite high. Adding software cannot remove that burden--it can only mask the entropy in a system, not reduce it.

            For what it's worth, people around here say 'ails'. :-)
            • Perhaps I was overly curt with you earlier. I just get really tired of hearing "you mean Freenet" any time distributed storage is discussed. How would you like it if somebody said "you mean Windows" every time you mentioned operating systems, no matter how un-Windows-like the proposed operating system was? How "classy" would you be in correcting such a statement? Would you, perhaps, call it horseshit [slashdot.org]?

              You're wrong about the last three points, though. It does have encrypted shared private namespaces, where people would have to have your public key to read the files. That's rudimentary file permissions for read. You also cannot publish to that directory unless you use the private key, which is rudimentary file permissions for write.

              Private namespaces are not the same as directories, and the rudimentary access control they offer is in no way comparable to the sorts of permissions that any legitimate filesystem on any modern general-purpose OS is expected to support.

              No data consistency? I'm not sure what you mean here, since it's checksumed and encrypted and passed around in pieces all over the place, it seems very self-consistent.

              "Consistency" (a.k.a. coherency) has a very specific and generally well-understood meaning in this context, which you should learn before you start spouting off about whether Freenet exhibits it. In a consistent system, if node A writes to a location and then node B reads it, B will (assuming no other writes in the interim) receive the value A wrote and not some older "stale" value. There are varying levels and types of consistency, representing different guarantees about the conditions under which the system guarantees that B will get current data, but Freenet does not ensure consistency according to even the loosest definitions.

              How does it improve the file which is only used in one place, by one person, when sitting at a specific computer? It doesn't.

              You're apparently not considering the advantage of not losing data if that one computer fails. Some people would certainly consider that advantage to be considerable.

              In any case, I don't think I ever said that all data should be placed in the distributed data store. In fact, I rather distinctly remember saying the exact opposite. Modern operating systems permit the use of multiple filesystem types concurrently, so there's nothing keeping you from keeping data local if you so choose.

              USB drives in the 1gb range that are the size of a pocket key are available today, for about $900.

              $900/GB? And you're seriously comparing that to a software-only solution that might carry zero dollar cost? Do you really think your silver USB bullet is the ideal solution for everyone, i.e. that there aren't plenty of people who would be better served by the distributed-storage alternative?

              • Not to belabor this overly, but...

                Private namespaces are not the same as directories, and the rudimentary access control they offer is in no way comparable to the sorts of permissions that any legitimate filesystem on any modern general-purpose OS is expected to support.

                Then you're proposing this 'shared drive network of computers' have a central server. There's no alternative which offers authorization for users, which allows proper access controls. I don't deny that the permission model for FreeNet isn't exactly standard OS fare. I specifically made that distinction, in fact. But I absolutely defy you to come up with a pure P2P way to do it with identical security to modern OSes without a central authority. The article did not appear to be promoting someone running servers to authenticate users, so my assertion is entirely appropriate.

                but Freenet does not ensure consistency according to even the loosest definitions.

                FreeNet has (mostly) the same properties as a WORM drive's file system. Once written, data cannot be changed. Someone could very well write a file system driver that makes such access possible, and FreeNet would appear to the user similarly to a CD-ROM drive that they can write to. Isn't ISO9660 a coherent model?

                Good point about hardware crashes, by the way. I overlooked that. And the reason I brought up the USB drive is in the fairly near future, the prices will be very, very reasonable. I don't think any of the solutions we're discussing will be feasible immediately, so looking a few years in the future is appropriate for a basis of comparison. And still, software is not zero cost. Perhaps from the user's perspective, but it has great cost for the infrastructure.
                • Then you're proposing this 'shared drive network of computers' have a central server.

                  I really wish you'd stop telling me what I'm saying. I'm not talking about central servers now any more than I was talking about Freenet earlier.

                  I absolutely defy you to come up with a pure P2P way to do it with identical security to modern OSes without a central authority.

                  "Pure" P2P? Identical security? That's commonly referred to as moving the goalposts [colorado.edu] and it should be beneath you. It's not necessary to describe any X that meets a standard to prove that Y does not.

                  You obviously don't think it's possible to reconcile strong access control with decentralization. That's fine, but don't you think it's a little disrespectful to assume that other people who've spent a lot more time than you studying the problem have given up too? You're basing your argument on an axiom that's not shared with your interlocutor, but then I guess it doesn't matter because it's a digression anyway.

                  The article did not appear to be promoting someone running servers to authenticate users, so my assertion is entirely appropriate.

                  Appropriate, but inadequate. Freenet's SSKs are still not equivalent to real directories, or real access control, no matter how much you bluster.

                  FreeNet has (mostly) the same properties as a WORM drive's file system. Once written, data cannot be changed.

                  WORM drives don't drop data like Freenet does. They might reject new writes when they're full, but they don't toss some arbitrary piece of old data on the floor to make room.

                  FreeNet would appear to the user similarly as a cdrom drive that they can write to. Isn't ISO9660 a coherent model?

                  False equivalence. You haven't shown that Freenet is in any way like a CD-ROM, and in fact it differs from CD-ROM in this particular regard. Two nodes attempting to read the same data from Freenet simultaneously might well get different data, if one finds a stale copy in someone's cache first and the other finds a fresh copy in another cache. That is not consistent/coherent behavior.

                  Again: Freenet is not a filesystem. Not only is it not implemented as one, but its very protocol does not support features expected of filesystems (some of which I haven't even gotten around to mentioning yet). Neither of these can change without Freenet becoming something totally different from what it is now, perhaps without abandoning its central goals of strong anonymity and resistance to censorship. There's nothing wrong with Freenet not being a filesystem. Perhaps it's something better; certainly many people seem to see it that way. All it means is that when people are talking about filesystems they're not talking about Freenet and you shouldn't tell them that they are.

  • about 15 years ago. Down to the gee-whiz! Jetsons prose.

    Has Scientific American become nothing but a speculative fiction and PR site for political movements and corporations?

  • Until the bandwidth/price ratio available for internet connections grows significantly higher, there are only a few exceptional cases where the cost of the data distribution is low enough to make internet-distributed computation feasible.

    The same applies to clustered storage, with the added problem of the latency to access such storage.

    This is not, unfortunately, a tool for helping the average computer consumer. It may, however, be useful for SOME scientific computational problems (i.e. ones doing heavy analysis of easily partitionable data), but those are certainly in the minority.

    Unfortunately, over any significant distance the speed of light soon brings a halt to the scalability of most problems on a widely distributed system, imposing a minimum latency beyond which the system stops scaling. As computers get faster and storage gets larger, this point of diminishing returns gets lower.

    Now if we throw in the legal aspects... Can you see the ISPs liking this? How about companies whose equipment is used without their knowledge, and who do we blame for the illegal pr0n being stored, unknown to the user, on their equipment?

    We should not be trying to find ways of consuming bandwidth, as it is going to become a more and more valuable resource as computers get more powerful; instead we should be looking to minimise the bandwidth consumed for given services.

    If computers were not still scaling at the rate they are, this might be a useful idea, but that won't happen for some time.
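
    For a sense of the latency floor mentioned above, with an assumed 10,000 km path and light travelling at roughly two-thirds of c in fibre:

      distance_km = 10_000
      speed_km_per_s = 200_000        # ~0.66 c in fibre
      one_way_ms = distance_km / speed_km_per_s * 1000
      print(f"one-way: {one_way_ms:.0f} ms, round trip: {2 * one_way_ms:.0f} ms")
      # one-way: 50 ms, round trip: 100 ms -- millions of CPU cycles per exchange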

  • Sorry for going off-topic, but I just have to grieve any time I see anything about my former favorite magazine. Before computers, walking around reading one of these was how you knew who the real geeks were. Where once you had Nobel Prize winning contributors writing articles that took a week to digest, now you have watered down fluff comparable to Discover or Newsweek. Next time you come across an issue printed before 1985, pick it up and learn something.
    • Yep, that's why I'm building up a little collection of the old ones - some from the early 50's (one has an article by Einstein) and a bunch of late 50's; a few from the 60's and just about all from 1970 thru 1998, picked up real cheap from an eBay moving sale with local pickup (woohoo!). It's an interesting half-century time machine - one does see a slow change from rather rigorous science research to ... today.
  • An ISOS suffers from a familiar catch-22 that slows the adoption of many new technologies: Until a wide user base exists, only a limited set of applications will be feasible on the ISOS. Conversely, as long as the applications are few, the user base will remain small. But if a critical mass can be achieved by convincing enough developers and users of the intrinsic usefulness of an ISOS, the system should grow rapidly.

    This quote sounds like it came straight out of an article about Linux. The only difference being that Linux is not restricted to the limited set of applications it is capable of running.

    If Linux is struggling (up to this point) to get mass acceptance and use, I can't see an ISOS getting off the ground for a long time yet, or ever.

  • by Cheshire Cat ( 105171 ) on Monday March 04, 2002 @12:31AM (#3103977) Homepage
    As other posters have pointed out this is a duplicate article. [slashdot.org] But hey, turn this repeat to your advantage! Go read the previous posting and repost all the +5 posts as your own, then watch the karma roll in! :)

    (Yeah, it's a little off-topic. I'm sure the mods will see the funny in it.)
  • Not practical (Score:3, Interesting)

    by lkaos ( 187507 ) <[anthony] [at] [codemonkey.ws]> on Monday March 04, 2002 @12:33AM (#3103983) Homepage Journal
    This article makes one fatal assumption: Consumers will always purchase more powerful equipment than they need.

    The time of super fast home-PCs is likely to not last very long. The incoming .NET and dotGNU waves are likely to make thin clients much more realistic.

    There is absolutely no reason for 'Mary' to have so much computing power since she doesn't need it. The only real limiting factor today is bandwidth, which this article assumes anyway.

    What is probably likely in the future though is a more distributed OS. One that is truly network transparent in every facet of operation. I believe there are some rumors floating around about MIT working on something to this effect...
    • Re:Not practical (Score:4, Interesting)

      by Jerf ( 17166 ) on Monday March 04, 2002 @01:56AM (#3104216) Journal
      The time of super fast home-PCs is likely to not last very long. The incoming .NET and dotGNU waves are likely to make thin clients much more realistic.

      Can you back this up with any real facts? Today, for $500, you can own a bare-bones Athlon system, which 20 years ago was a supercomputer, minus a bit of memory.

      Even after we hit the Fundamental Barrier, whenever that is, computers will continue to improve for a while due to architecture improvements and innovative designs (like 3-D chips, currently totally unnecessary but providing one road for expansion in the future).

      It gets to the point where on the consumer level, in a very short period of time (specifically, *before* .NET takes off), it costs very, very little to put in an Athlon versus a Pentium 100, and that cost is swamped by the display cost. You still need memory on the client side for buffering. You still want a hard drive on the client side for other buffering (like video; a one-minute buffer fills RAM pretty fast, but on any conceivable real-world future network, we'll need those buffers).

      Maybe YOU call a 4GHz Athlon II w/ 512MB of RAM and a 100GB hard drive a thin client, useless to Mary. I call it a dream come true. You have to postulate a Major Breakthrough within the next two-to-three years in display technology for the cost of the display not to swamp the cost of at least (more realistically) a GHz machine with 128 MB of (fast) memory. We'd probably know about it already. So, do you buy the $200 "thin client" that can't do anything on its own, or the $235 "I'd kill for this machine in 1985" that runs fifty times faster, and feels ten times more responsive?

      (I made a couple of assumptions in this post. But one way or another, Mary needs a supercomputer in her home. Either for use that looks like modern use, or to serve as the central server for the house. I, and many others, even among the computer non-savvy, will NOT farm my data out to a foreign entity! .NET does not eliminate the need for fast computers, it just moves it. And the need for more power will be with us for a while yet. We're in a computation bubble here, but voice technology, video streaming, REAL teleconferencing, better video games, and a lot of truly desirable things are still waiting for us over the computation power horizon. And that's just the applications we KNOW about...)
      • The era of the super-fastest home PC might be over.

        I'm more than happy with three 600 MHz PIIIs at the house. I've got a good deal of RAM (1GB and 512MB of PC133 SDRAM), some good video cards (GeForce 3s) and some ATA-100 cards with more than 100GB of drive space. There is NO WAY that anyone would describe these boxes as cutting edge. Sure they're better than the average bear, but I don't see replacing anything on these machines for a looooong time.

        Please remember I'm pretty damn geeky...these machines are more than capable of doing anything that I want them to do (uh, other than working through 8 million blocks from dnetc every second). They game incredibly well. The big one can easily handle 10 users as an Unreal Tournament server (while it still firewalls my network and acts as a mailserver for 6 domains and runs fetchmail for 5 accounts).

        Sure, I'd love to do (WARNING: FreeBSDism ahead) a "make buildworld" in 2.76 minutes. I'd love to talk shit about the magnificent magnitude of my PCs at home. But I don't need to. I'm (depending on what component you look at) about four to eighteen months out-of-date on hardware. I still don't need to do any significant upgrades. The only upgrade that I might need to do in the next year is my video card, and that's not certain.

        I'd love to upgrade the hardware...faster is always better...but I don't need to. I've had these boxes in their current incarnation for about a year. I still have absolutely no need to upgrade anything. Sure, I'd like to -- but I don't need to.

        Hell, my wife has one of the very first G4s made (one of the "crappy" ones -- back when 128MB of RAM was "a bunch")... the only time she'll need to upgrade is if the computer bursts into flames. My brother-in-law asked me about buying a computer -- I pointed him to the slowest P4 that Dell sold (he didn't need any more, and unless companies start making DNA analysis a requirement for registering software, he'll never need anything more).

        As long as Joe-Schmoe-Home-User doesn't upgrade his software (and let's be honest...that rarely ever happens unless J-Random-Hacker forces the issue) he doesn't need to upgrade his hardware. I don't know about you, but I've only seen two pieces of software (not counting games) in the last year or three that was worth upgrading hardware for: Mac OS X and Windows 2000.

        Intel could swoop down tomorrow with a 39.4 GHz MegaMonsterKickAssium. I wouldn't buy one. I'd think to myself, "Man, I wish I could afford one of those MegaMonsterKickAssiums. But, oh well, I don't really need one. Time to go home to the Pentium IIIs."

        (Disclaimer: I'm talking about home PCs...I'm not talking about 3D rendering, real-time computing, massive scientific computing -- just 'average' home PCs).
        • The era of the super-fastest home PC might be over.

          Agreed! I've got an 800MHz machine, now over a factor of two behind state-of-the-art, rapidly coming up on three, and I have no desire to upgrade. (Weird feeling, that.) I also use a Pentium 233 and even a 133 laptop, day to day, and the 133 only sometimes bothers me.

          But I'm not giving up the 800MHz... ;-)
      • For this all to work as specified in the article, high bandwidth connections must be available.

        With sufficient bandwidth, why should anyone _ever_ pay for cycles that they do not use? All you really need is a high bandwidth connection with the computational equivalent of a TV, with a small reverse feed for input devices.

        With the advent of set-top boxes, the age of the PC is coming to an end. It just isn't useful for the typical consumer. The only inhibiting factor today is bandwidth. The internet OS _assumes_ bandwidth availability though. That is its flaw. With proper bandwidth, there is no need for anything other than a glorified TV.
        • With sufficient bandwidth, why should anyone _ever_ pay for cycles that they do not use?

          That's an argument for power (as in electricity) conservation, not cycle conservation. Use still tends to grow to match resources. Block off resource growth, on the excuse that it's unused, and you'll block off use growth, a.k.a. "innovation".

          Normally one would want to consider the environmental side of increasing resources, but happily (and this is the great miracle of computers), there are no particular downsides (within reason) to increasing cycles. I still don't see the economic value in bastardizing our modern supercomputers, to save quite literally a couple of bucks, when the work could be done at home.

          The edges still have vastly more power than the center, and that won't change. Ever. Virtually by definition.

          I submit that only a naive analysis of the cost/benefits tradeoff can conclude that it's worth giving people "thin clients", and nothing else. If nothing else, do YOU want to be beholden to the Central Authority for what you can and can't do? Forget the moral issues, even. What if they don't install Quake VI when you want to use it? What if you want to install a software package that the Authority hasn't installed, for whatever reason? How shortsighted to give up the power to do these things, which even "mundanes" do all the time, just to save $15 on the client! (Throw in the personal freedom issues, and it's a *really* dumb idea.)

          People still need their own processing centers inside their own homes. (They may choose to connect to that with OTHER thin clients, but there's still that central server, which is what Microsoft, .NET, and the "thin client" crowd keep trying to do away with. For their benefit, not yours.)
  • Besides the above reasons as to why it wouldn't take off, if the person isn't getting paid more than it costs to leave the computer on, there isn't any incentive for Joe Blow to just leave it on all day and contribute. For some people (the ones that are probably already using dnet or seti) this isn't a problem because they usually have it on, however most families aren't going to leave the computer on all day all the time, plus many computers go into a suspend mode which saves power.
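
    Rough incentive math for that point, with assumed numbers (a 150 W PC left on around the clock at $0.10 per kWh); any per-cycle payment would have to beat this just to break even on electricity:

      watts = 150
      price_per_kwh = 0.10
      kwh_per_month = watts / 1000 * 24 * 30
      print(f"${kwh_per_month * price_per_kwh:.2f} per month")  # about $10.80
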
  • In the 1999 paper [freenetproject.org] "A Distributed Decentralized Information Storage and Retrieval System" which formed the basis for the Freenet [freenetproject.org] project, the following future direction is suggested:
    Generalisation of Adaptive Network for data processing
    A longer term and more ambitious goal would be to determine whether a distributed decentralised data processing system could be constructed using the information distribution Adaptive Network [Freenet] as a starting point. Such a development would allow the creation of a complete distributed decentralised computer

    Guess there is nothing new under the sun.

    • Apparently someone took seriously the suggestion of recycling the highly-moderated posts from the previous ISOS thread. The parent is an exact copy of this post [slashdot.org] by Ian Clarke on that thread.

      BTW, the answer to the (implied) question in Ian's original paper is no. A useful "distributed decentralized data processing system" cannot be built on top of Freenet, or any other storage system that drops data as soon as the herd stops requesting it.

  • Formalising peer-to-peer filesharing in conjunction with a completely redesigned OS, the concepts of distributed computing and micropayments? Why not?... after all, then you can find some nifty acronym for the whole thing. Which turns out to have been previously used in this case by the International Seminar for Oriental Studies.

    It's an interesting idea and handy in its own way, but taken to the extreme - would you want your system controlled by a central server, possibly owned either by government or by a consortium of some kind? And all of your files backed up somewhere else on the network, way out of your reach?

    I am way too paranoid for this.

    Also: Consider Mary's movie, being uploaded in fragments from perhaps 200 hosts. Each host may be a PC connected to the Internet by an antiquated 56k modem--far too slow to show a high-quality video--but combined they could deliver 10 megabits a second, better than a cable modem.

    Doesn't this assume that Mary is not connected to the Internet by an antiquated modem? In which case, surely she can't download at 10 megabits a second either...
  • I was really worried for a second there, I thought the headline was "Internet spamming OS".

    That's one variant of NetBSD we DON'T need developed...
    • Yea! It WILL be an internet spamming OS. It will spam the Internet with its own internal communications, encrypted and redundant...
  • Wouldn't a newly made distributed system either be sued out of existence by people in power, or controlled from its inception by them? I can't picture it working beyond a Distributed.net/SETI kind of thing.

    The financial aspect of it is quite interesting though; information and media could be "virtually free" because you'd essentially be leasing out your idle computing resources.
  • A higher-order scaling OS. One that ties together thousands of Internets, each one full of hundreds of millions of computers.

    Let's notate your Linux box as floodge(0), and ISOS as floodge(1). This higher-order OS would be floodge(2).

    It gets better. Now consider an OS of order floodge(N), where N is an unimaginably large but finite number. This would harness the power of millions ** N computers! Truly outrageous horsepower; more teraflops than there are electrons in the universe. Just think of how many extraterrestrial intelligences we could discover per second!

  • The sole reason it won't work: what if the 3 computers with my taxes die or end up off like her laptop, and it's April 14? I know it sounds easy, but even so, it seems risky... not to mention the privacy concerns, but those have already been covered.
  • "The second is distributed online services, such as file storage systems, databases, hosting of Web sites, streaming media (such as online video) and advanced Web search engines"

    Yeah...sure... *coughDMCAcough*
    I'm sure this would really fly. Plus, how secure can this really be?
    Not to mention that the current internet infrastructure is not nearly fast enough to handle this.

    "Extraordinary parallel data transmission is possible with the Internet resource pool. Consider Mary's movie, being uploaded in fragments from perhaps 200 hosts. Each host may be a PC connected to the Internet by an antiquated 56k modem--far too slow to show a high-quality video--but combined they could deliver 10 megabits a second, better than a cable modem."

    Ok, but you're also effectively saturating 200 56k hosts... what if these people are downloading? Also...think of the unnecessary overhead of downloading from two hundred sources at once. I understand how this works.... similar to KaZaA, for example. You download fragments of a file from all over the place. You also see ten different versions of the same file, virus infected files, and inconsistent download speeds. One day you'll download a file at 100k/sec, the next you might be downloading it at 2k/sec. Also, does anyone else realize what havoc these p2p applications (which is really what this ISOS is) wreak on a filesystem? Do a weekend of downloading large files on any of the p2p networks and run a defrag analysis...you'll see exactly what I mean.

    I can see this happening some time, just not soon by any stretch. The article does talk about the other use for this technology -- distributed processing. This is actually a viable option....but...newsflash...it's been around for a few years now. See SETI@HOME, Distributed.net, etc. These projects require little dependence on the unreliable internet. Well...that's not true...but they don't rely on massive amounts of data transfer per host. They rely on processing power, which is controlled by the client, for the client -- without relying on the internet.

    Anyways, enough of a rant. I just think that the internet as it is now would not be able to take advantage of this technology.

    -kwishot
  • Yes, this has been featured in numerous other postings, but every time it is mentioned in a theoretical capacity. What I would like to see is a practical approach to this problem via transparent Java clients.

    I looked vigorously for a Java-based client that can be employed in a distributed setting. I found ONE person working on this about a year ago, but it was not maintained. I would love to see Java code extended to a distributed.net client and then embedded inside certain web sites that support distributed.net.

    For instance, you go to distributed.net and click 'contribute resources now' and bam, a Java client kicks in and you're crunching keys.

    The main barrier to parallel acceptance is the ease of contribution: many people don't want to install a client and configure it correctly. Java (even JavaScript) is now mature enough to handle parallelism inside the browser. Where is it?!
    • Popular Power [popularpower.com] had a java-based client. It basically ran off a JDK it helped install on your system, not via the browser. (They ran out of money, dunno what happened to the code.) It would run when your screen saver turned on, which I think makes more sense than asking a user to visit a website.

      You're missing the real problem with all these distributed approaches. There aren't many corporate commercial computing jobs that are limited by compute speed. High-end server applications are usually most limited by disk I/O rates, which none of these ISOS approaches effectively address.

      ISOS is great for compute-bound problems, OK for network-bound problems, and lousy for disk-I/O-bound problems, while the application portfolio willing to pay for speedup is overwhelmingly the reverse, except for a few scattered niches.

      RPM speeds on disk drives don't improve at Moore's Law rates. The CPU isn't the bottleneck, the database is the bottleneck.

      --LP

      P.S. Also, writing parallel-efficient applications remains mostly "hard."
  • Good Grief! People's memories are short.

    We don't want "The Network Is The Computer". Remember mainframes? Remember how we joyfully fled from them?

    What we want is to really own our computer power.

    We want a very clear sense of "This is my computer" and "This is my data". I can do what I like with it.

    Think folks, what is all the fuss about security and file sharing? Ownership. This is my data to own (keep private) and my data to share (if I choose).

    Complexity and installation difficulties steal our sense of ownership. When the computer is a burden, we don't want to own it. Complexity robs us of choice.

    The correct fix is not an ISOS, or retreat to mainframe days. The correct fix is to simplify and make things easy.

    I don't want my work computer to be my home computer. My employer and I definitely want a strong sense of separation on that front thank you.

    Forget these silly pipe dreams, and concentrate on easing the pains of ownership so that we have strength to share.

    All this is a silly confusion over....

    • What I do and can do... (I want unlimited freedom and choice)
    • What intellectual product I create... (It costs nothing to make another copy, so why limit distribution?)
    • What hard product I create... (It costs much effort to make a copy, and requires hard inputs.)
    • What I own... (What I control)
    • What is private... (Thoughts and activities that concern me only thank you)

    Remove the confusion between the above items, and the desire for silly things like "The Network Is The Computer", the DMCA, etc. goes away.

    • by gwernol ( 167574 ) on Monday March 04, 2002 @01:56AM (#3104217)
      We don't want "The Network Is The Computer". Remember mainframes? Remember how we joyfully fled from them?

      And remember what happened when the Internet came along? Everyone suddenly wanted to be part of a network of machines. Of course, the Internet is a diverse set of services running on a diverse and redundant network of machines rather than dumb terminals attached to controlled and homogeneous hardware, so it's a great step forward from the days of mainframes. Nevertheless, the Internet is very much a distributed computer system.

      When I use Slashdot I am consuming resources on a remote computer. These days I probably use more CPU power and storage that lives out on the Net than lives on my machine. I don't know about you, but I love it. Much better than the days of standalone machines.

      What has happened is we've moved from the days of monolithic, tightly controlled mainframes and terminals, through the personal computer revolution and on to a mixed peer-to-peer and client-server world that gives you the advantages of both approaches.

      Of course there are issues, and security and control are amongst the biggest. But these can be solved ultimately, and I no more want to go back to standalone PCs than I want to go back to mainframes.

      What we want is to really own our computer power.

      Then disconnect your machine from the Net, and you will be happy. However don't presume to speak for the vast majority of computer users who seem extremely happy to be part of a large, distributed network of machines and systems.
    • The purpose of an ISOS is not to go back to mainframes. Mainframes are central locations to house data and run applications that are used by dumb clients that don't run anything for themselves. An ISOS would be practically the opposite. Rather than having all programs running on a central server, each program could conceivably run off of several different computers. One computer would still control the data that is on a "terminal", except in the ISOS view, that computer is the terminal itself.

      In this system, each computer is effectively renting out space on other people's computers. When you need extra CPU cycles for a massive Bryce rendering you just created, the work can be distributed among multiple computers that have allowed your computer to rent their CPU cycles, in exchange for the future possibility of using your cycles when you're not using them. Believe it or not, your processor isn't at 100% utilization as you type messages to Slashdot.
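
      Mechanically, that cycle-renting idea is just work-farming. Below is a deliberately tiny sketch of handing the frames of a render job to whichever machines have offered their idle time; the Peer interface and the round-robin scheduling are invented stand-ins for whatever the ISOS would actually provide.

          import java.util.ArrayList;
          import java.util.List;
          import java.util.concurrent.ExecutorService;
          import java.util.concurrent.Executors;
          import java.util.concurrent.Future;

          public class CycleRenting {
              // Stand-in for a remote machine that has offered its idle cycles.
              interface Peer { int renderFrame(int frameNo); }

              public static void main(String[] args) throws Exception {
                  // Three pretend peers; a real pool would be discovered by the ISOS.
                  List<Peer> idlePeers = new ArrayList<>();
                  for (int i = 0; i < 3; i++) idlePeers.add(n -> n * n);   // fake "rendering"

                  ExecutorService farm = Executors.newFixedThreadPool(idlePeers.size());
                  List<Future<Integer>> frames = new ArrayList<>();
                  for (int f = 0; f < 100; f++) {
                      final int frame = f;
                      Peer peer = idlePeers.get(f % idlePeers.size());     // naive round-robin scheduling
                      frames.add(farm.submit(() -> peer.renderFrame(frame)));
                  }
                  for (Future<Integer> done : frames) done.get();          // gather the finished frames
                  farm.shutdown();
              }
          }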

      We want a very clear sense of "This is my computer" and "This is my data". I can do what I like with it.

      An ISOS wouldn't affect this any. The idea behind the ISOS is to pool the unallocated resources of the collective computers on a network. If the local machine needs a resource (long-term storage, memory, cycles) it can use its own resources without question. In the ISOS model, it can even get resources in excess of what it is capable of. Who wouldn't like to have 120 GB of storage when one only possesses a 60 GB hard drive? On the other end, suppose your drive is full of other people's data. If you need more space, just delete the other people's data. It won't affect them (thanks to the miracles of redundant distribution).

      As far as data goes, nothing should change either. A particular user will be the only one who has the ability to access a specific piece of information. It's not like a user will be able to just browse other people's files that are stored on your computer. Before you cry out "I can't even look at the files on my computer!", hold that thought. Technically, you can look at the files, but since they're encrypted, you won't see much. And if this annoys you, you can just delete them.

      What I said earlier isn't exactly true. I said that you could delete other users' file backup fragments, or that you could request CPU time, etc., implying that the user can do this directly. These are operations that should be handled by the ISOS. Suppose your hard drive is fully utilized, between local applications and other people's files. If you really need to store something locally, the shared space will automatically be shrunk, the excess returning to the local system.
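
      Here is a rough sketch of what that storage side could look like -- none of this is from the article, and the in-memory "peers" and plain AES sealing are simplifications I'm assuming: the owner encrypts a fragment with a key only it holds, pushes copies to several hosts, and any host can drop the foreign fragments it holds whenever it wants its disk back, because the remaining replicas keep the data alive.

          import javax.crypto.Cipher;
          import javax.crypto.KeyGenerator;
          import javax.crypto.SecretKey;
          import java.util.ArrayList;
          import java.util.HashMap;
          import java.util.List;
          import java.util.Map;

          public class PooledStorage {
              static final int REPLICAS = 3;
              private final SecretKey ownerKey;                 // never leaves the owner's machine
              private final List<Map<String, byte[]>> peers;    // in-memory stand-ins for remote hosts

              PooledStorage(List<Map<String, byte[]>> peers) throws Exception {
                  this.peers = peers;
                  this.ownerKey = KeyGenerator.getInstance("AES").generateKey();
              }

              // Encrypt a fragment and store copies on several different peers.
              void store(String fragmentId, byte[] data) throws Exception {
                  Cipher c = Cipher.getInstance("AES");         // a real system would use an authenticated mode
                  c.init(Cipher.ENCRYPT_MODE, ownerKey);
                  byte[] sealed = c.doFinal(data);              // hosts only ever see ciphertext
                  for (int i = 0; i < REPLICAS && i < peers.size(); i++) {
                      peers.get(i).put(fragmentId, sealed);
                  }
              }

              // "The shared space will automatically be shrunk": a host just drops the
              // foreign fragments it holds; the remaining replicas keep the data alive.
              static void reclaimLocalSpace(Map<String, byte[]> hostedFragments) {
                  hostedFragments.clear();
              }

              public static void main(String[] args) throws Exception {
                  List<Map<String, byte[]>> fakePeers = new ArrayList<>();
                  for (int i = 0; i < 3; i++) fakePeers.add(new HashMap<>());

                  PooledStorage pool = new PooledStorage(fakePeers);
                  pool.store("movie-fragment-042", "some fragment bytes".getBytes("UTF-8"));
                  reclaimLocalSpace(fakePeers.get(0));          // peer 0 wants its disk back; two copies remain
              }
          }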

      I don't want my work computer to be my home computer. My employer and I definitely want a strong sense of separation on that front thank you.

      Why is this separation necessary? Obviously, the hardware will exist in two separate areas. But other than that, how is it detrimental that the desktop "at work" be disconnected from the desktop "at home"? In the network created from ISOS, this idea of separation by use is irrelevant. Each computer is simply a resource user and supplier. Some computers might be specialized at doing one type of computation better than others, so it will get appropriate work.

      In another scenario, the ISOS Resource Pool at your job could be completely separate from a global Resource Pool (internet). So each computer at work would share resources only with other computers at work.

      I liken the ISOS to the idea of any public resource, like roads or parks, monuments, etc. The world would be much less friendly were you required to personally own everything you used.

      An ISOS isn't about control of a single computer, it's about effective use of the aggregate resources that computers in general can provide. It's all about the resources. Your computer is simply a resource that can be used to accomplish something.

      I personally challenge the view that one can "own" the resources of the computer. Most certainly I own my hardware, but can anyone own the ability to compute that is inherent in everything? But this is straying off topic. Perhaps in another discussion group...

      • Believe it or not, your processor isn't at 100% utilization as you type messages to Slashdot.

        Ah, but my 'net connection is....(and that has nothing to do with my typing speed....) I trawl the 'net for info.

        I think the author's mistake is calling it an OS. Reading the article more closely, it isn't an OS. It is more a load-balancing, general-purpose RPC stub with several huge problems....

        • Security. The ISOS allows any program to be run remotely on the client's PC. I don't think I need to enumerate the long list of vulnerabilities associated with that.
        • It is only useful when the inputs and outputs are very, very small compared to the compute time. Very few commercial apps satisfy that.
        Why is this separation necessary?

        You haven't been watching all this fuss about "inappropriate use", have you? Have you forgotten Borland's shenanigans where the bosses raided the employees' email? No thanks; both the bosses and I want separation of work and private life.

        Your computer is simply a resource that can be used to accomplish something.

        Your head is simply a resource that can accomplish something. Can I borrow it for a while...? It's obviously not at 100% utilization :-)

  • Seen before (Score:1, Informative)

    by Anonymous Coward
    Uh, you mean like this OS [slashdot.org]?

    Wow, 3 years on Slashdot and this is the first time I've caught a duplicate story before anyone else. What do I win? :) A free Kuro5hin.org account? :)

  • woo (Score:2, Funny)

    by nomadic ( 141991 )
    Just type find with no arguments and you can see every file on every computer on the net...
  • There are still no simple ways to use a pair of computers on the same desk efficiently -- why not start there?

  • The insight one gains from reading the article is, of course, not that all developers should drop whatever they are doing and rush to develop The OS Which Will Cure All The Ills Of The World. Nor would it be possible for the desktop user to make any money in the manner described: if computing became so cheap, the cost of processing the transfer of money would far exceed the value of the computing time contributed.
    The message is that P2P could indeed be the killer app for the desktop that Linux is waiting for; world domination is indeed possible if only we are a little more inventive. What OS is best equipped to support massively distributed computing? *nix, of course. Windows users already have a hard time protecting their machines from the internet. What we need is more robust P2P protocols designed with security and scalability in mind.
    In the meantime, check out distributed.net :-)
  • As the cliche goes, if it sounds too good to be true, it probably is. This would work wonderfully if all the other technology on the horizon stood still -- specifically, quantum computers. I understand nothing about them, but all I've heard they're usable for is breaking encryption. Actually, they will render all current encryption worthless. I don't worry about my data being decrypted because it's on my box, but if it's spread out everywhere...
  • For the last two years, we have been working on something extremely similar:
    Jtrix [sourceforge.net]

    Technically, Jtrix has micro-kernel-like agents (nodes) running on host machines. Applications consist of code fragments that can be executed on the nodes. There is a mechanism for binding to a remote service, and that's pretty much all you need as a basic platform (a rough sketch of the idea is below). Of course, it's convenient to have some support services (e.g. file storage), but that's already in userland (as it should be).

    A lot of this is implemented and working. We have got one problem, though: we need a killer app to get people running those nodes ...
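
    To show what we mean by nodes and binding (this is NOT the real Jtrix API, just a stripped-down sketch of the concept, with the service registry faked by a shared map):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class NodeSketch {
            interface Service { String call(String request); }   // what a binding hands back
            interface Fragment { void run(Node host); }           // application code shipped to a node

            static class Node {
                private final Map<String, Service> registry;      // stand-in for real remote lookup
                Node(Map<String, Service> registry) { this.registry = registry; }

                void publish(String name, Service s) { registry.put(name, s); }
                Service bind(String name) { return registry.get(name); }   // "bind to a remote service"
                void execute(Fragment f) { f.run(this); }                   // run a shipped code fragment
            }

            public static void main(String[] args) {
                Map<String, Service> registry = new ConcurrentHashMap<>();
                Node storageHost = new Node(registry);
                Node workerHost = new Node(registry);

                // One node publishes a (trivial) storage service...
                storageHost.publish("file-store", request -> "stored: " + request);

                // ...and another node runs a fragment that binds to it and uses it.
                workerHost.execute(host -> {
                    Service store = host.bind("file-store");
                    System.out.println(store.call("chunk-17"));
                });
            }
        }
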
  • You couldn't have a Beowulf cluster of these, now could you.
  • Because to have an effective OS, you would need to have trusted access to those resources.... Raise your hand if you are going to trust some stupid OS that you have no control over to use your spare processor cycles, memory space, and hard drive.
