Next Windows to Have New Filesystem 1008

ocipio writes: "Microsoft is currently planning a new filesystem. It's planned that the new filesystem will make searches easier, faster, and more reliable. Windows will also be less likely to break, and easier to fix when it does. The new technology will cause practically all Microsoft products to be rewritten to take advantage of it. Called Object File System, OFS will be found in the next major Windows release, codenamed Longhorn. More information can be found here at CNET."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Predictions (Score:3, Insightful)

    by PeterClark ( 324270 ) on Wednesday March 13, 2002 @12:32PM (#3156837) Journal
    It will be proprietary, obfuscated, and impossible for other operating systems to read/write to. Furthermore, it will have all sorts of copy management "features" built in to it.


    Yes, I'm cynical. But really, why shouldn't I be?


    :Peter

  • Can you say DRM? (Score:3, Insightful)

    by indole ( 177514 ) <fluxist@ g m a i l.com> on Wednesday March 13, 2002 @12:36PM (#3156871) Homepage

    To be honest, NTFS seems to be a tip-top file system to me. The only thing I can imagine it missing is hardcore digital rights management (can't wait).

    What a clever way to force DRM down every consumer's throat: break every single Windows program created prior to OFS.

    fuckers.

  • by JeanBaptiste ( 537955 ) on Wednesday March 13, 2002 @12:36PM (#3156872)
    ... my FAT32 and NTFS seem to work okay... I don't think my concerns with Microsoft are a result of their filesystems... this isn't a Microsoft bash, I just think they would do better to focus their efforts elsewhere...
  • by Hostile17 ( 415334 ) on Wednesday March 13, 2002 @12:39PM (#3156907) Journal

    Windows will also be less likely to break, and easier to fix when it does.

    Doesn't MS say this about all the new versions of their products? Not that Windows hasn't improved, it certainly has, but they also never seem to live up to the hype.

  • by Aexia ( 517457 ) on Wednesday March 13, 2002 @12:41PM (#3156930)
    Everyone would have to buy new versions of all their office software! Isn't that handy for MS?

    I'll pass. I may be running (pre-installed) XP on my Dell but I'm still using Office 97. Why?

    BECAUSE IT WORKS JUST FINE.

    I don't need to "upgrade" to something even more bloated and bug ridden.
  • My question (Score:3, Insightful)

    by kb3edk ( 463011 ) on Wednesday March 13, 2002 @12:41PM (#3156931)
    How long do /. readers think it will be until the Linux kernel and/or Samba will be able to read OFS shares?
  • by einer ( 459199 ) on Wednesday March 13, 2002 @12:42PM (#3156940) Journal
    I don't think that this is what they are after. It is possible that they will have to open their file I/O APIs soon because of the anti-trust case here and in the EU. I truly (naively?) believe that they are moving to a better filesystem because it's a better filesystem, not because they want to break interop between *nix and MS. I'm also an eternal optimist.
  • by Mr. Neutron ( 3115 ) on Wednesday March 13, 2002 @12:43PM (#3156952) Homepage Journal
    It would seem to me that IF Microsoft is going out of its way to develop a new FS, and IF that FS is not going to contain the copy-protection goodies that the entertainment industry is clamoring for, that Microsoft is basically thumbing its nose at the MPAA and RIAA, and siding fully with PC users and hardware manufacturers.

    Rather a good thing to know.

  • Re:OT: Refreshing! (Score:3, Insightful)

    by Pope Slackman ( 13727 ) on Wednesday March 13, 2002 @12:44PM (#3156960) Homepage Journal
    and the FAT32 took a major hit.

    XP is an NT-based OS...why were you using FAT32 at all when NTFS is available?

    C-X C-S
  • by cybrthng ( 22291 ) on Wednesday March 13, 2002 @12:45PM (#3156977) Homepage Journal
    "Do you honestly believe that the benefit of a faster search is enough incentive to rewrite such a major part of the OS?"


    Yes, you are a troll. Is it wrong for Microsoft to advance file systems and state specific reasons, yet right to preach about the many choices in file systems Linux/Unix has?

    When you're talking about .NET and the future technologies Microsoft is pushing, and if you have *EVER* used Windows XP, you will realize that faster searches and file retrievals are MUCH needed.

    Say you open a folder of 5,000+ MP3s and it searches the ID3 tags of every song to display artist/title as part of the description; having a file system optimized for quicker searches of data on the disk will only streamline this more.

    So yes, this is cool, and yes, there is a lot more than just "fast searches" as you put it.
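    The speedup being described can be sketched with an ordinary embedded database standing in for a filesystem metadata index (sqlite3 and every name below are illustrative assumptions, not anything OFS actually specified):

```python
import sqlite3

# Hypothetical stand-in for a filesystem metadata index:
# instead of opening every file and parsing its ID3 tag,
# the "filesystem" keeps artist/title in an indexed table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (path TEXT, artist TEXT, title TEXT)")
db.execute("CREATE INDEX idx_artist ON files (artist)")

rows = [(f"C:/mp3/track{i}.mp3", f"artist{i % 50}", f"title{i}")
        for i in range(5000)]
db.executemany("INSERT INTO files VALUES (?, ?, ?)", rows)

# One indexed query replaces 5,000 file opens.
hits = db.execute(
    "SELECT path, title FROM files WHERE artist = ?", ("artist7",)
).fetchall()
print(len(hits))  # 100 tracks by artist7
```

    The point of the sketch: the tag data is read once at write time, so a later search never touches the 5,000 files at all.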
  • by costas ( 38724 ) on Wednesday March 13, 2002 @12:46PM (#3156987) Homepage
    Completely agreed. This extends the OO nature of Windows down to the FS level. .NET extends it up to the network level. It's a huge play by MS and it's a huge step forward.

    If only Windows Scripting Host gave you a dead-easy way to script/tinker with the Windows objects...
  • by W2k ( 540424 ) on Wednesday March 13, 2002 @12:56PM (#3157076) Journal
    Hey trollie, read the article. Or better yet, the Slashdot posting. This file system has nothing to do with DRM.

    Firstly, there is no such thing as "Microsoft Digital Rights Management Software" (Media Player supports DRM, but only for WMAs), and Microsoft has nothing whatsoever to gain from including DRM features in the file system. They know and we know that Longhorn with DRM will go down the toilet, while Longhorn without DRM will sell just as well as WinXP, probably better.

    The second thing you got wrong is that this system is not (just) about speeding up searches. It's about replacing an antiquated system that's been around since MS-DOS with something future-proof, faster, and more reliable. Considering they've been working on this for 10+ years, they'll probably succeed eventually. And when they do... boy, don't even get me started on that.

    Now, for something constructive. When will we see this in Linux? Surely, if Microsoft can do this, so can the people working on Linux. Riiight?
  • How do you figure? (Score:3, Insightful)

    by Pope Slackman ( 13727 ) on Wednesday March 13, 2002 @12:56PM (#3157078) Homepage Journal
    Everyone would have to buy new versions of all their office software!

    Ummm... why?
    Changing the FS would really only affect the way the data is stored on the drive; the app/FS interface should be abstracted by the OS.

    C-X C-S
  • Re:Veritas? (Score:3, Insightful)

    by King_TJ ( 85913 ) on Wednesday March 13, 2002 @12:57PM (#3157088) Journal
    Yeah, it's probably far too early to tell what they really have in mind. (I doubt MS is really sure yet. If anything, they're probably still in the early stages of experimenting with different ideas to see what works best for them.)

    The rough idea I got was that they want to make the file system a giant database, though. This would be a vast departure from NTFS, FAT32, or any other file system used today. They're saying "instead of creating database files on a hard drive, each for a specific application - and then creating all of these independent files and folders for the applications themselves, why not dump *everything* into one large database that *is* the file system?"

    I would think that they wouldn't *have* to rewrite apps like Office in this scenario, but they'd *want* to - to take advantage of the new functionality possible with such a "database as filesystem" concept. Without a code rewrite, the Office apps wouldn't be able to import content via advanced search features. (E.g., import all photos on my drive related to the company I'm writing my letter to, above, and let me browse these thumbnails so I can find the ones I need.)
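    A toy version of that "everything in one queryable store" idea (the table layout and queries are invented for illustration, not Microsoft's design):

```python
import sqlite3

# Sketch of the "database as filesystem" concept: paths, kinds,
# and free-form relationships live in one store, so an app can
# ask for content instead of walking folders.
fs = sqlite3.connect(":memory:")
fs.execute("CREATE TABLE objects (path TEXT, kind TEXT, related_to TEXT)")
fs.executemany("INSERT INTO objects VALUES (?, ?, ?)", [
    ("photos/logo.png",    "photo",    "Acme Corp"),
    ("photos/beach.jpg",   "photo",    None),
    ("letters/acme.doc",   "document", "Acme Corp"),
    ("photos/meeting.jpg", "photo",    "Acme Corp"),
])

# "Import all photos related to the company I'm writing to."
photos = [p for (p,) in fs.execute(
    "SELECT path FROM objects WHERE kind = 'photo' AND related_to = ? "
    "ORDER BY path", ("Acme Corp",))]
print(photos)  # ['photos/logo.png', 'photos/meeting.jpg']
```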
  • Try Again. (Score:2, Insightful)

    by NetJunkie ( 56134 ) <jason.nash@CHICAGOgmail.com minus city> on Wednesday March 13, 2002 @12:57PM (#3157091)
    NTFS has been a journaling file system similar to those for a very long time. Linux was the one playing catchup on that end.
  • by jonbrewer ( 11894 ) on Wednesday March 13, 2002 @01:02PM (#3157146) Homepage
    IBM proved with the AS/400 that using a DB for the file system was the way to go. It's too bad they did it so far ahead of its time.

    I'm personally glad MS is finally changing their OS. Now that my workstation has 70GB of files, searches are taking an incredibly long time.

    I have less than 100,000 files on my workstation. Each has maybe 10 searchable attributes. A full search on this can take over five minutes. (800 MHz Athlon w/ 7,200 RPM IBM drives on a Promise controller)

    I know from experience that querying an Oracle database (on a cheap 500 MHz Linux box) on 100,000 records with 30 non-indexed columns/attributes generally takes around 2-3 seconds.

    Imagine if MS were able to build a file system with such capabilities.
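    The gap between those two timings is mostly indexing; a toy contrast (with made-up data, not a real benchmark):

```python
# Toy contrast between a linear scan (what a file search does)
# and an indexed lookup (what a database can do).
records = [{"name": f"file{i}", "owner": f"user{i % 1000}"}
           for i in range(100_000)]

# Linear scan: touch every record, like walking the whole disk.
scan_hits = [r["name"] for r in records if r["owner"] == "user42"]

# Index: build once, then each lookup is a single probe.
by_owner = {}
for r in records:
    by_owner.setdefault(r["owner"], []).append(r["name"])
index_hits = by_owner["user42"]

assert scan_hits == index_hits
print(len(index_hits))  # 100 matches either way, found very differently
```

    On disk the difference is even larger than in memory, since the scan pays a seek per file while the index is a few contiguous reads.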
  • by TheTomcat ( 53158 ) on Wednesday March 13, 2002 @01:06PM (#3157174) Homepage
    Why not give me a version number and some way to know what program created it.

    right click on pretty much any file (win2k), select the summary tab, click advanced, fill in the author and revision number.

    Many executable files also contain pre-filled version numbers on the version tab: \WINNT\explorer.exe has company name, internal name, language, original filename, product name and product version as well as file version, description and copyright notice.

    But, yes, Windows does still rely heavily on file extensions to determine type/opener. I don't mind that so much; it's easy to change a file's association by changing its extension. It's on the Mac that I've had trouble: files that are bona fide JPEGs opening in SimpleText (I didn't have PC Exchange set up properly, and the file types weren't set to PHSD (or whatever the magic number for Photoshop is)).
  • by blakestah ( 91866 ) <blakestah@gmail.com> on Wednesday March 13, 2002 @01:07PM (#3157193) Homepage
    NTFS has features like ACLs, streams, etc. that aren't in FFS or UFS. Also, support for transparent compression and encryption, and sparse files. There's support for quotas in the filesystem, and it's quite resistant to the effects of fragmentation. It's journalled and supports Unicode. It's actually a very good filesystem, one of the better parts of NT.

    Right. Which raises the question: why bother?

    The push is just a load of hype to drive the upgrade path. They are going to engineer a database into their file system "to make searches faster" because doing it the slocate way would not force another round of complete system upgrades on consumers.

    You may have also noticed that Outlook and Office will need to be rewritten to "take advantage" of the new file system. So not only will they leverage OS upgrades, but Office upgrades as well. They are planning to rip out a perfectly good file system (which is called "antiquated" in the article) to make billions of dollars, and the press releases are all about consumer benefit.

    And consumer benefit, as you have noted, is essentially nil.
  • C: (Score:1, Insightful)

    by Anonymous Coward on Wednesday March 13, 2002 @01:10PM (#3157210)
    but we still have to put up with C:/D:... When will they just use regular mount points??!?!
  • by AB3A ( 192265 ) on Wednesday March 13, 2002 @01:17PM (#3157261) Homepage Journal
    Does this remind anyone of the VMS operating system's Record Management System?

  • Re:Predictions (Score:5, Insightful)

    by killmenow ( 184444 ) on Wednesday March 13, 2002 @01:19PM (#3157280)
    The Register [theregister.co.uk] had an article [theregister.co.uk] about this ages ago.

    Think SQL Server 2003 = OFS

    Not only do they get their new FS with nifty new features (DRM yada...yada...) but imagine this scenario...

    MS Sales Rep: "You need a database?"
    Potential Sucker^H^H^H^H^H^HCustomer: "We're looking at Oracle."
    Rep: "Oracle's OK and all, but you know...the TCO is over the top. With Windows NG [Next Generation], a top-notch, state-of-the-art DB is included for free."
    Customer: "Really? Hmm...well, if it's already in there, I might as well use it."

    Don't knock it, it worked with IE.
  • by Junta ( 36770 ) on Wednesday March 13, 2002 @01:19PM (#3157281)
    Getting rid of extensions is not necessarily a good thing...

    First off, all kinds of things are already designed around the extension idea; redesigning everything won't work that easily. Also, users are used to the concept of extensions. MS is very much aware of this. If you take, say, Windows 98 and go to edit file type associations, the list is sorted by type description. This doesn't work, as the user is not likely to know the string attributed to the file type he is thinking of. For example, recently my fiancee wanted to change the default viewer for .avi files. Of course she checked under "a", then "w", then the "mi*" area, and no guesses were right. The right answer was "Movie Clip" (way too generic, but anyway...). Now look at the same dialog under, say, XP. You will see that things are sorted by extension. While it may seem clunky and inelegant conceptually, in practice it is elegant. I would say identifying type is important enough to belong in the filename.

    Secondly, these extended attributes are not portable. Many widely used protocols would be unable to automatically notify the client machine of this information, forcing the user to set the type of every downloaded file manually. Sure, you can embed them directly in the file, but who gets to dictate the format? I can bet you that MS would extend any standard to break compatibility with other systems if it existed. By tying the reading of these extended attributes more tightly to the opening of files, you are inviting MS to come and make life harder on non-MS platforms, and inviting technologies such as DRM to have more success...

    Finally, what about performance? As it stands, a system based on /etc/magic would be prohibitively slow. If you suddenly designate a part of the file space as needing to contain type information, tons of legacy problems can arise, and that field better be pretty long, and have a standard organization dictating what gets to use what codes. You can keep the information out of the file and efficient through Extended Attributes (already possible with NTFS, XFS, Be's FS, among others), but as I mentioned before this would not work cross-platform.

    The system as it stands now works quite well. Windows explorer already works to "protect the user from himself" by not allowing renaming of extension on a file easily. We have an established, cross-platform standard for identifying file types, we don't need to blow that...
  • Re:OT: Refreshing! (Score:3, Insightful)

    by IamTheRealMike ( 537420 ) on Wednesday March 13, 2002 @01:20PM (#3157290)
    Yes, I think this is great news!

    Look at it this way - some of us may wish that the whole world used Linux, but it doesn't. It uses Windows. So when MS announces that they're taking a big risk (and it is a big risk) to try and make such an enormous upgrade to Windows, I think we should be happy that a few years down the road, if Windows is still dominant then at least people will be benefiting from this technology.

    But... this doesn't mean we should just sit back and go "well done, Microsoft!" After all, I recall reading about something similar over at the ReiserFS page... how long until Linux users get this technology too?

  • Re:OT: Refreshing! (Score:2, Insightful)

    by archen ( 447353 ) on Wednesday March 13, 2002 @01:21PM (#3157295)
    Maybe because just about any other OS you would dual-boot into can read AND WRITE to FAT32. Using Win2k, I use NTFS for the program junk, but have a FAT32 partition for the pile of stuff I use between Win2k and FreeBSD.
  • Registry Redux (Score:5, Insightful)

    by Anonymous Coward on Wednesday March 13, 2002 @01:22PM (#3157306)
    That's what they said about the registry: it would solve all of the problems with INI files.

    But as everyone knows, with totally undisciplined usage of the registry, the registry is a nightmare. In some cases it is impossible to clean it up and the only solution is a reinstall.

    Ask any DBA. Even with the most heavy-duty, industrial-strength DB, somebody can come up with a schema and application that will bring that DB to its knees. Prepare for deja vu.
  • by binkley ( 25431 ) <binkley@alumni.rice.edu> on Wednesday March 13, 2002 @01:34PM (#3157397) Homepage
    Why is this marked "funny"? The author is perfectly serious, if perhaps using an amusing tone.

    One of Unix's greatest strengths is the widespread use of human-readable files.

    + Do man pages use bizarre binary formats for markup? No; some inscrutable codes are there, but the text of the man page remains.

    + Are configuration files kept in a common, fragile binary repository? No, they are stored as human-readable, editable and searchable files.

    Why does Microsoft want to change their filesystem? Well, they are now getting bitten in the butt for having binary file formats, and want to fix the problem by making yet more proprietary layers, locking out other search solutions.
  • Re:Predictions (Score:2, Insightful)

    by powerlinekid ( 442532 ) on Wednesday March 13, 2002 @01:37PM (#3157421)
    Technically, hacking NTFS is illegal... it's a proprietary FS, backed, I'm sure, by some patents. I'd also like to point out that

    1) It doesn't matter if it's illegal for a lot of people... downloading MP3s is illegal, using DeCSS-based DVD players (I believe Ogle?) is illegal, things like that. Once the code is out there, if someone has a use for it, they'll use it.

    2) I doubt this will be an issue. By the time Longhorn comes out, things probably will be very different in the tech world, as they always are for every new Windows release. If the AOL stuff happens, Microsoft's control will slip a little. If that's the case, they won't be able to force upgrades as easily as they did with XP, which means they won't get as much software tailored for the new FS, which again hurts the chances of people upgrading. Microsoft should stick to what it has: 20 years of backwards compatibility. If they really wanted to make a drastic change, they should have done away with the 9x line long, long ago (like around 95) and stuck with NT and fixed that. Starting fresh is only going to hurt them, especially when their application base is their greatest advantage.
  • by e40 ( 448424 ) on Wednesday March 13, 2002 @01:39PM (#3157428) Journal
    Crikey, symlinks have been around for ages ('82?). How can MS say they have a modern FS without these?

    Don't tell me shortcuts are equivalent to symlinks. They are a veneer on top of the OS; they are not transparent to programs that don't know about symlinks, the way symlinks on UNIX are.
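    That transparency is easy to see on a Unix system (a small sketch; it needs a filesystem that supports symlinks):

```python
import os
import shutil
import tempfile

# A symlink is resolved by the OS, so programs that know nothing
# about links read through it transparently -- unlike a Windows
# shortcut (.lnk), which is just a data file the shell interprets.
d = tempfile.mkdtemp()
target = os.path.join(d, "real.txt")
link = os.path.join(d, "alias.txt")

with open(target, "w") as f:
    f.write("hello")
os.symlink(target, link)

# Plain open() follows the link with no special handling.
with open(link) as f:
    content = f.read()
print(content)  # hello

shutil.rmtree(d)  # tidy up the scratch directory
```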
  • by woozlewuzzle ( 532172 ) on Wednesday March 13, 2002 @01:44PM (#3157475)
    Agreed - instead of making a filesystem that can search their proprietary formats (.doc, .xls, etc.), why not make the formats more easily searched? Put all your numbers and text (what the heck is email, after all, but text) into text files, and use your resource fork (OK, stream) for all the formatting code, which doesn't need searching.

    Sounds like XML docs with formatting appendages (streams) would be a bit easier.

    I guess Bill would lose his Office monopoly tho, if that were the case.

    Nevermind

  • by killmenow ( 184444 ) on Wednesday March 13, 2002 @01:47PM (#3157496)
    But if it did that, then the DLLs would still be considered a part of the app (or suite of apps in this case). By embedding the functions needed in system DLLs that get loaded by the OS at boot because the OS itself needs some function of that DLL, you can claim the DLL is not part of the app, but part of the OS. Then, your APP can be smaller, load and run faster, and you lock your app into your OS.

    Genius.
  • by killmenow ( 184444 ) on Wednesday March 13, 2002 @01:54PM (#3157546)
    It didn't upgrade any OS DLLs because it didn't HAVE to. The CODE IT NEEDS IS ALREADY THERE.

    If Microsoft ships the OS with HALF of the OFFICE CODE already EMBEDDED in SYSTEM DLLs, you still can't USE OFFICE without the other HALF...which is what you installed when you loaded Office XP.

    That's why it LOADS faster, RUNS faster, and has SMALLER executables. The code for office, much like every other Microsoft product is being MIGRATED into the OS itself.

    The OFS initiative will EMBED SQL Server INTO the OS itself.

    Bye bye, RDBMS competition.

    Got a browser competing with you, embed IE into the OS. Got Citrix competing with you, embed terminal services into the OS. Got Oracle competing with you, embed the DB.

    It's a proven successful tack and it makes sense.
  • Re:Predictions (Score:3, Insightful)

    by erasmus_ ( 119185 ) on Wednesday March 13, 2002 @02:25PM (#3157790)
    Nice scenario, but not quite accurate. Just because the file system would be database-type, does not mean that SQL Server is going to be built in. For a good example of how Microsoft currently handles low-end storage utilizing SQL Server technology, look at MSDE (MS Data Engine) [microsoft.com], included with Visual Studio and also available as a free download. It's a scaled-down version, and certainly does not have all the querying or tools of SQL Srv. But the storage mechanism is similar. Likewise here, SQL Server would probably utilize the system, but Windows would not come bundled with it in any sense. Therefore, a statement that Windows will compete with Oracle out of the box is not supported by facts.
  • Re:Metadata (Score:3, Insightful)

    by Wanker ( 17907 ) on Wednesday March 13, 2002 @02:31PM (#3157842)
    What's wrong with it? If your files are associated with metadata, you need to maintain it and programs need to deal with it. How should the metadata get copied? What do you do with it if the file is accessed through HTTP? Proponents always think this is merely an issue of standardization, but it isn't: it's an intrinsic problem with metadata.
    You are right that this is an intrinsic problem with metadata. In the GNOME link I had above they say:
    The biggest problem is database consistency. The problem here is that if a metadata-ignorant tool is used to manipulate the filesystem, then metadata information will be lost. For instance, in an implementation that associates a file name with the data in some separate database, a naive file rename will cause the metadata to be lost.
    Which is why near the top of their page they say:

    Implementing metadata is a tricky problem. I believe there is no way to do it perfectly without designing it into the entire operating system from the ground up.
    In other words, the only way to have metadata be reliable is if the operating system controls it to a degree where you cannot (under normal operation) manipulate the file data without manipulating the metadata. Clearly, abnormal operations like filesystem debuggers can get around these restrictions, but one could argue that people who do things like that create their own problems. (I.e. Macintosh file association editors can seriously break file associations. Go figure.)

    Microsoft is proposing just such a scheme-- they will control the filesystem. If applications access the filesystem through the proper API/system calls the OS can ensure that the file metadata will be kept in sync. (I.e. they will have required arguments to the API to provide input for the metadata.)

    To take one of your questions as an example:

    What do you do with it if the file is accessed through HTTP?
    Suppose there is a metadata field for "last accessed time". The HTTP server opens the file for reading using a fictional Windows system call called "open(*FILE)". Windows then internally updates the "last accessed time" metadata field and opens the file for reading. In this simple case the OS does all the work.

    Suppose there is a new metadata field for "last accessed from". In this case the "open(*FILE)" call would need a new parameter (or some other way to pass in metadata, like "open_new(*FILE, *metadata_struct)") so that the HTTP server can feed in the server that accessed the file. For backward compatibility, the default might be the local server if the old system call/API is used.

    Of course, this is all still vaporware. We'll just have to see what really happens.

  • Re:Predictions (Score:1, Insightful)

    by Anonymous Coward on Wednesday March 13, 2002 @02:46PM (#3157967)
    Control-H (indicated by ^H) is the same as a backspace on certain terminals. What they are saying is that they typed something, then deleted it.

    I remember the old BBS days where you could type something, and then backspace over it, and it would save everything you typed: the original, the backspaces, and the new text. So as you read, a word would slowly disappear, and then change.

    This was sort of bad if you typed something you really didn't want people to see, but funny if you used it to call someone a name in jest, and cover it up with something more tactful.

    So when you see ^H, just think they are backspacing over what they just typed. That's the joke. If you are used to using terminals, or old software, then you'd understand a little better.
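    The effect can be simulated by applying each ^H literally (a quick sketch of naive terminal behavior, not any real terminal emulator):

```python
def render(typed):
    """Apply ^H (backspace) characters the way a naive terminal would."""
    out = []
    for ch in typed:
        if ch == "\b":
            if out:
                out.pop()   # erase the previous character
        else:
            out.append(ch)
    return "".join(out)

# The joke from the earlier comment, replayed:
print(render("Sucker\b\b\b\b\b\bCustomer"))  # Customer
```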
  • by jd142 ( 129673 ) on Wednesday March 13, 2002 @02:53PM (#3158025) Homepage
    Another unverified, just-my-personal-experience, YMMV tip is to remove any files from your desktop. If you store files there, especially large ones, it will slow Windows down. Even a large number of shortcuts can have an effect.

    Yes, I know you are not supposed to store things on the desktop, but windows makes it far too easy to do so. Plus it has the advantage that once you have downloaded a file, you can see it immediately without having to navigate to the right directory.
  • by Anonymous Coward on Wednesday March 13, 2002 @03:16PM (#3158241)
    Amen to that.

    Unfortunately most of the comments here are dissing on MS, or saying "use ASCII!" or some other lame cop-out. Yes, MS has an illegal monopoly, and yes they are horrible, yadda yadda yadda. That's not the point. We're so busy talking crap that no one stops to see the good ideas here, and how we could benefit on Linux.

    The point (I think) is that this is useful. This is like Be's filesystem. I would love to have that under Linux. Or even, to merely have OS/2's extended attributes under Linux would be wonderful. THAT is what we should be discussing. Everything else is just noise.

    I'm working on my own Linux-based OS (who isn't? *sigh* I try to keep it in perspective, though; I'm just doing this for me and my family for now). I have no sacred cows with regard to designing an OS. I would love to support a database-style filesystem. I would love to have metadata supported in the filesystem. I would love to have the entire user interface and all applications take advantage of this. But how?

    As a first step, could ReiserFS take plugins to allow metadata to be attached to files? Could Qt be extended to interface to that? Is there any general format that existing tools (tar) could use to not lose this metadata?

    As an example of how this could be useful: I take lots of pictures with a digital camera. I would like to annotate the images somehow. Embedding those annotations in HTML doesn't work because the annotations are lost when, for example, the image is emailed. (Yes, JPEG has a comment field, but what about the hundreds of other file formats out there that don't?) Think like a computer scientist: this is generally useful, so it should be generally available. Abstract it out; think of the JPEG comment field as a single hack; I want the full elegant solution. It should be in the filesystem.

    MS's vision is larger than what I've said here, but even this would be a huge leap forward for Linux. I would like to see it happen.

    I could go on and on... But most people would rather bitch about MS, rather than thinking how we can improve a free operating system. *Sigh*.

    (If anyone wants to post more ideas rather than anti-MS rants, post below. Maybe we should get together on this.)

    -Charles
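    The annotation idea above can be sketched as an extended-attribute-style store that travels with the file data (a plain-Python simulation; real xattr support, such as os.setxattr on Linux, depends on the filesystem):

```python
# Simulated extended attributes: arbitrary key/value metadata
# attached to any file, regardless of its format.
class AnnotatedStore:
    def __init__(self):
        self.data = {}     # path -> file bytes
        self.xattrs = {}   # path -> attribute dict

    def write(self, path, blob):
        self.data[path] = blob
        self.xattrs.setdefault(path, {})

    def annotate(self, path, key, value):
        self.xattrs[path][key] = value

    def rename(self, src, dst):
        # A metadata-aware rename moves the attributes with the
        # data; a naive tool that only moved self.data would
        # strand them -- the consistency problem discussed above.
        self.data[dst] = self.data.pop(src)
        self.xattrs[dst] = self.xattrs.pop(src)

store = AnnotatedStore()
store.write("vacation.jpg", b"\xff\xd8...")
store.annotate("vacation.jpg", "caption", "Sunset at the lake")
store.rename("vacation.jpg", "2002-03-lake.jpg")
print(store.xattrs["2002-03-lake.jpg"]["caption"])  # Sunset at the lake
```

    The caption survives the rename only because the store mediates it, which is exactly why this wants to live in the filesystem rather than in each application.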
  • by The Cat ( 19816 ) on Wednesday March 13, 2002 @03:34PM (#3158380)
    For every "denied" message you get as an admin, chances are you can give yourself access to do this.

    An administrator (sysadmin, root, whatever) should never be denied access to anything ever, no way, no how, zip, zilch, nada.

    Without full access to the machine, and every resource on it, it is impossible to administer it properly.

    "Permission Denied" shouldn't even exist for the administrator account.
  • by gosand ( 234100 ) on Wednesday March 13, 2002 @03:55PM (#3158524)
    Exactly how is this a risk to M$? If this new FS doesn't fly, do you think they are in trouble as a company? Ha. Not bloody likely. That isn't taking a risk at all. They could simply up their subscription fees by $1 and make up their losses.

    I wonder how many R&D projects get canned inside MS. Probably a whole lot. And building a new FS is nowhere near innovative anyway. Are you suggesting that MS thought up the idea of this new type of FS? Puhleeze.

  • by F.Prefect ( 98101 ) on Wednesday March 13, 2002 @03:59PM (#3158550) Homepage
    Oh please. +1 Insightful? How about -1 FUD-riddled Karma Whoring. Samba has nothing to do with the filesystem. It deals with the Server Message Block network protocol. The filesystem being run on the remote system is irrelevant to the operation of Samba.

    Now if we were talking about Microsoft coming out with a new obfuscated replacement for SMB (which is an evil hack and needs to be replaced with something less thoroughly bletcherous anyway), then you might have a point. But we're not talking about that at all.

  • by ethereal ( 13958 ) on Wednesday March 13, 2002 @03:59PM (#3158557) Journal

    Nothing's a risk when you have $billions in the bank. Do you realize how long Microsoft could coast at this point if they completely stopped doing any work at all?

  • by Sanity ( 1431 ) on Wednesday March 13, 2002 @04:03PM (#3158591) Homepage Journal
    This is the weakness of de-centralized development
    I don't think it is a fundamental weakness of the Open Source model; I just think that Open Source developers feel that their mission is to re-implement everything as Open Source, but not so much to actually forge new ground. It is a cultural problem, but it isn't inevitable.

    It is possible: there are examples of Open Source projects which really do innovative new things, but they are quite rare. Part of the reason they are so rare is that a developer needs a thick skin to not be disheartened by the countless numbers of people around the Open Source community who would rather nit-pick other people's efforts than contribute themselves.

  • uhhhhhh (Score:0, Insightful)

    by j0nkatz ( 315168 ) <anonNO@SPAMmemphisgeek.com> on Wednesday March 13, 2002 @04:15PM (#3158713) Homepage
    If you had a NT box then why would you need a Lunix box anyhow?
  • by Anonymous Coward on Wednesday March 13, 2002 @05:11PM (#3159154)
    Wow, you're really an idiot.
  • Baby, Bath water (Score:3, Insightful)

    by RovingSlug ( 26517 ) on Wednesday March 13, 2002 @05:21PM (#3159224)
    Once all your data is in a common data store and can be manipulated as such, it opens up a world of new possibilities. ... we've been conditioned and trained to think of data storage in terms of files... I could see someone emailing me a project. Not some word documents, an excel spreadsheet, and a database zipped into a ZIP file; they just email me the project.

    Don't throw the baby out with the bath water.

    A "file" expresses a fundamentally useful idea: a clear demarcation of data that lives independent of the host filesystem. Once you start tying and interweaving data tightly with the host filesystem, how do you export it without a significant, altering transformation?

    That is, when someone "just emailed you the project", what did you get? How much of the filesystem did or didn't come along with it? Have we opened the door for Version Hell? Also, can the data be compressed without having to know that it is?

    Let's just be careful to clearly define what we want and how we get it.

    A "file" lets us abstract the data from the filesystem. It is then trivial for that data to live on Ext2, Ext3, FAT16, NTFS, Joliet, in a zip, in a tar, as an email attachment, or in a pipe to an arbitrary process.

    With a "common data storage", it sounds like what is really wanted is for each "object" to emit a standard, common interface. Once everything has that interface, we can wrap a database system around it to transform the data in lots of unique, interesting ways. Is there something implicit about this new abstraction that it has to live in the filesystem instead of on it (Is-A versus Has-A inheritance)? Does it require that we throw out other, existing, useful abstractions ("files") to get it?

    It sounds like an equivalent solution is to encapsulate each file in a platform-independent, self-describing data structure, then impose the database query system on top of that. That both maintains the separation between file and filesystem and provides all the features of the "common data storage".
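
    To picture that "self-describing wrapper plus query layer" idea, here is a hypothetical sketch in Python (the class and field names are invented for illustration, not anything Microsoft has described): the payload stays an ordinary byte blob that can still travel over email or tape, while a metadata layer on top is what the "database" actually queries.

    ```python
    import fnmatch

    class SelfDescribingFile:
        """Hypothetical wrapper: the payload stays an opaque byte blob,
        portable to any filesystem, while searchable metadata rides along."""
        def __init__(self, name, data, **metadata):
            self.name = name
            self.data = data          # still just bytes -- zip it, mail it, pipe it
            self.metadata = metadata  # the part a query layer would index

    def query(store, **criteria):
        """Toy 'database on top of files': match on metadata patterns,
        never touching the payload bytes themselves."""
        return [f for f in store
                if all(fnmatch.fnmatch(str(f.metadata.get(k, "")), pattern)
                       for k, pattern in criteria.items())]

    store = [
        SelfDescribingFile("report.doc", b"...", author="alice", project="longhorn"),
        SelfDescribingFile("budget.xls", b"...", author="bob", project="longhorn"),
        SelfDescribingFile("notes.txt", b"...", author="alice", project="other"),
    ]

    print([f.name for f in query(store, project="longhorn")])
    # -> ['report.doc', 'budget.xls']
    ```

    The point of the sketch: the query layer sits *on* the files rather than replacing them, so the file abstraction survives intact.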

  • by Reziac ( 43301 ) on Thursday March 14, 2002 @12:49AM (#3161098) Homepage Journal
    Instead of relying on the extension to define file type, why not just read the file type from the header (or other clues) like some DOS utilities have been able to do for over a decade? Hell, I can do that much with my own eyeballs and a hex viewer.

    But the idea of the filesystem and files being basically one beast -- that's scary. Reminds me of a compressed volume file a la DoubleSpace/Stacker.

    I'd expect the minimal effect would be a complete lack of access to my data except thru "approved" channels.

    Worst case, when the OS does go titsup, my data goes with it (a la DoubleSpace), rather than being left in messy but recoverable files per older filesystems.

    This may be great for big databases (billions of records and up) but I'll stick to discretely accessible files for my own systems, thank you.
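
    The "read the type from the header" trick Reziac mentions is easy to sketch. The magic numbers below are real, well-known signatures, though any serious tool (the Unix `file` command, for instance) knows thousands more:

    ```python
    # Identify a file by its leading magic bytes instead of its extension.
    # A handful of well-known signatures; far from a complete list.
    MAGIC = [
        (b"MZ",                "DOS/Windows executable"),
        (b"PK\x03\x04",        "ZIP archive"),
        (b"\x89PNG\r\n\x1a\n", "PNG image"),
        (b"GIF8",              "GIF image"),
        (b"%PDF",              "PDF document"),
    ]

    def sniff(header: bytes) -> str:
        """Return a type name based on the file's first bytes."""
        for magic, name in MAGIC:
            if header.startswith(magic):
                return name
        return "unknown"

    print(sniff(b"%PDF-1.4 ..."))  # PDF document
    ```

    Rename `report.pdf` to `report.doc` and `sniff()` still gets it right, which is exactly why those old DOS utilities could do it.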

  • by BakaMark ( 531548 ) <markl.netluminous@com@au> on Thursday March 14, 2002 @01:11AM (#3161180) Homepage
    It's not that I think basing a filesystem on a database is a great idea. For one thing, it's a pretty good bet that performance is going to suck because of all the extra DB-related overhead.

    The possibility of the system breaking has also increased. Journalling, etc will only get you so far, and it has taken companies such as Microsoft and Oracle years to try and get that one nailed down.

    There have been the means to provide index searching on individual files within Microsoft products since the Option Pack for Windows NT 4 came out. However, it was not the best of index search engines, and there were a ton of problems maintaining the integrity of the index. The product was not using SQL Server as its database, and it was a real pain to make it interact with SQL Server (you had to apply SP x, and then hotfixes, so you would not kill your database with the overload).

    One of the issues then was the ability to search other file types such as PDF, etc. This was a right royal pain to set up. It is probably no easier now.

    The reason redevelopment of the applications is necessary is so that you can have the new "search type fields" in your documents. This ability actually exists now (Windows 2000), but the indexing capability is not the best and is still based upon the old system. To make matters worse, the indexing application has to do all of the interpretation itself (by calling a supplied filter DLL).

    I guess the real important question is if the thing can be turned on or off, because not every installation of the OS will actually require a feature such as this, and the overheads will be sizeable.

    We are slowly going down the path where we have so many features, bells and whistles that we end up confusing the poor users trying to use the damn thing.

  • by ricardo2c ( 561838 ) on Thursday March 14, 2002 @01:12AM (#3161186)
    Like I said, maybe.
    But I'll answer you, so we can keep the discussion going...
    Think of today, not 100 years back. Think outside the academic environment. Think of an innovative idea. Doesn't the common sense tell you "sell it" or "make profit"? Where did all of companies come from? At&T, IBM, Apple, MS, (your list here)...
    ONLY IF I didn't have the guts to carry out my ideas, or knew I couldn't do it without being smashed by the big guys... only then would I not make a profit off them?
    Imagine that... getting paid to do stuff I like! (I suppose that if you had an innovative idea, you actually do like the subject)
    Whoa, Nelly!
  • by Anonymous Coward on Thursday March 14, 2002 @07:42AM (#3161811)
    And 30 years ago the bleeding-edge filesystems (non-UNIX) did have metadata and resource forks and other whizbang ideas. Guess what? They sucked.
    Yeah, if the first implementations are flawed the whole concept must be flawed. Edison should have chucked in the towel when his first few filaments burned out.
    You can't pipe a multistream file.
    However, when your base class (and so all of its descendants) has writeToStream() and readFromStream() methods, then it's not really an issue, is it?
    You can't send it over the network. You can't dump it to tape.
    This article is an excellent example of being stuck 'thinking inside the [unix] box'.

    The idea that what was best for mainframes in the 70's is best for personal computers today is... well it's right up there with '640k should be enough for anybody'.
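
    The base-class point a few comments up is worth making concrete. A minimal sketch (class and method names are invented for illustration): an object holding several named streams, a la resource forks, can still flatten itself into one byte sequence that pipes, networks, and tape are all happy with.

    ```python
    import base64
    import io
    import json

    class MultiStreamObject:
        """Hypothetical object carrying several named streams (data fork,
        resource fork, metadata, ...) rather than one flat byte run."""
        def __init__(self, streams=None):
            self.streams = streams or {}   # name -> bytes

        def write_to_stream(self, out):
            # Flatten every fork into a single byte sequence: the part
            # that makes the object pipeable, mailable, and tape-friendly.
            payload = {name: base64.b64encode(data).decode()
                       for name, data in self.streams.items()}
            out.write(json.dumps(payload).encode())

        @classmethod
        def read_from_stream(cls, inp):
            payload = json.loads(inp.read().decode())
            return cls({name: base64.b64decode(data)
                        for name, data in payload.items()})

    obj = MultiStreamObject({"data": b"hello", "rsrc": b"\x00\x01"})
    buf = io.BytesIO()            # stand-in for a pipe, socket, or tape drive
    obj.write_to_stream(buf)
    buf.seek(0)
    clone = MultiStreamObject.read_from_stream(buf)
    print(clone.streams == obj.streams)  # True
    ```

    The serialization format here (JSON + base64) is arbitrary; the argument only needs *some* canonical flattening to exist for "you can't pipe a multistream file" to stop being true.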
  • by timetool ( 455763 ) on Thursday March 14, 2002 @02:19PM (#3163390)
    Re: "ACLs are one thing that should be prevalent on new filesystem designs."

    Before Microsoft puts too much time into a new file system, I'd like to see it make full use of the existing NTFS one -- especially the ACLs. I'd like the OS and program files to be in an area that cannot be written to by anybody on the outside, or even by myself unless I'm logged on with a privileged account -- to eliminate any possibility of upgrades/viruses and other stuff getting installed over the net or from e-mails, etc.
