Ubuntu Linux

Ubuntu Plans To Make ZFS File-System Support Standard On Linux

An anonymous reader writes: Canonical's Mark Shuttleworth revealed today that they're planning to make ZFS standard on Ubuntu. They plan to include the ZFS file-system as "standard in due course," but no details were revealed beyond that. However, ZFS On Linux contributor Richard Yao has said they do plan on including it in their kernel for 16.04 LTS, and that the GPL vs. CDDL licensing concerns aren't actually a problem. Many Linux users have wanted ZFS on Linux, but aside from the out-of-tree module there hasn't been any luck getting it into the mainline kernel or tier-one Linux distributions, due to license differences.
  • What he didn't say (Score:4, Informative)

    by n1ywb ( 555767 ) on Tuesday October 06, 2015 @05:50PM (#50674297) Homepage Journal
    is anything like "ZFS will be the default". He just said that it would be in the distro.
  • by DarkOx ( 621550 ) on Tuesday October 06, 2015 @05:51PM (#50674311) Journal

    I don't know why so many in the Linux community are so hooked on ZFS. BTRFS has a feature set that is rapidly getting there, and it's becoming more and more mature, with its code already in the upstream kernel.

    Why not just put your energy there?

    • by Phil Urich ( 841393 ) on Tuesday October 06, 2015 @06:01PM (#50674387) Journal
      Hell, it's already in many cases a superior experience on Linux, starting with the fact that you can shrink a BTRFS volume but you still can't shrink a ZFS volume. I suppose in the enterprise-centric world that ZFS is aimed at that's pretty much never an issue, but I've run into it personally multiple times working for a small business, and have been glad that I was running BTRFS instead. Frankly, for many use cases it seems like running ZFS on Linux is more hassle now for the sake of more hassle later on.
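
      A minimal sketch of the shrink operation being contrasted here, assuming a btrfs filesystem mounted at a hypothetical /mnt with enough free space (mount point and size are illustrative only):

          # shrink the mounted btrfs filesystem by 10 GiB, online
          btrfs filesystem resize -10g /mnt

          # there is no equivalent zpool command; a ZFS pool can grow but cannot be shrunk in place
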
    • by Guspaz ( 556486 ) on Tuesday October 06, 2015 @06:12PM (#50674467)

      Because BTRFS is and has always been redundant? ZFS is far more mature, and stories abound of BTRFS failing on people. BTRFS is still unstable, particularly its RAID5/6 support. Developers should be putting their efforts into ZFS instead of BTRFS.

      • Just because ZFS is mature on Solaris doesn't make it mature on Linux. Out of tree modules always suck in the long run. I agree that there is more future in BTRFS because of that. Unless they finally release ZFS under the GPL.

        • XFS is mature on Linux. Just need to add snapshots and things back in. I worked for a start-up that did just that, and then we never got to the point where we could release our XFS snapshot and replication support. Pity.

        • by ArmoredDragon ( 3450605 ) on Tuesday October 06, 2015 @06:40PM (#50674723)

          I've been using ZFS on Linux for about 3.5 years now, and it's been pretty stable. I can't say I've heard of a case of it failing for somebody that wasn't down to user error.

          • by ZorinLynx ( 31751 ) on Tuesday October 06, 2015 @08:14PM (#50675419) Homepage

            Likewise; we use it all over the place in our department. We have a bunch of ZFS file servers (96TB raw, 80TB usable) based on 24x 4TB SATA drives. The performance is amazing for the price and they are rock solid under all kinds of heavy load, except for one tiny bug we hit recently that has already been fixed.

        • by Guspaz ( 556486 ) on Tuesday October 06, 2015 @06:42PM (#50674741)

          zfsonlinux hit both unstable and stable releases on Linux earlier than btrfs: if your only definition of stable is how long it's been around on Linux, then btrfs is still less mature.

          Being in-tree says nothing about the stability of a module, but ZFS doesn't need to be under the GPL to be in Linus' tree: the GPL does not forbid code aggregation. That said, neither Linus nor the ZoL team want ZoL in Linus' tree.

          • >neither Linus nor the ZoL team want ZoL in Linus' tree.

            I bang my head on my desk frequently over Linus' stubborn nature, then I realize it's that same stubborn nature that makes Linux as great as it is, so I forgive him.

            If ZFS were part of the kernel, bugfixes and updates would have to follow the Linux kernel release schedule, which would make it a huge hassle to update the code on running systems without building custom kernels.

            Building custom kernels is something you shouldn't be doing in a production environment.

      • There is no problem with RAID 5 on btrfs. I've been running a RAID 5 on btrfs for more than a year. I'm not sure why ZFS fanboys always resort to lying about btrfs features and support, whether it's just laziness on their part or astroturfing.

        ZFS may be stable, but its development is relatively frozen, while btrfs is in many ways better but has lagged in development. This is probably because Oracle was the primary backer of btrfs before it bought Sun, and there just aren't enough developers working on it.

        • by Guspaz ( 556486 )

          In what way is development on ZFS frozen? Development is extremely active, probably far more so than btrfs.

        • Re: (Score:2, Interesting)

          by Anonymous Coward

          13.1% of the code changed between 0.6.4 and 0.6.5:

          http://fossies.org/diffs/zfs/0.6.4_vs_0.6.5/index.html

          That is far from being frozen. Even Linux does not have that percentage of its code change between releases:

          http://fossies.org/diffs/linux/4.2_vs_4.3-rc1/

          It would be interesting if someone checked how much fs/btrfs changes between releases.

        • by Bengie ( 1121981 ) on Tuesday October 06, 2015 @08:30PM (#50675531)
          Nearly all of the original Sun devs who created ZFS in the first place still work on OpenZFS full time and are paid to do so. OpenZFS is very actively developed. They give two or more presentations per year about all of the changes they're constantly making and some of the upcoming big changes. Currently they are focusing on standardizing ZFS between FreeBSD, illumos, and Linux. It's a large refactoring effort to have all of ZFS's code bases live in the same tree: one OpenZFS code tree for all OSes, with everyone in sync.

          You can't shrink ZFS pools because it can't be done atomically, and the developers refuse to do anything that lets the end user shoot themselves in the foot, like leaving the FS in an inconsistent state. But you can create a new, smaller pool and migrate your larger pool into it, as long as the data fits. You can't do it in place, but you can do it. It just sucks to do that with a 1PiB+ pool, but then, who shrinks those?
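
          A rough sketch of that workaround, assuming a hypothetical source pool "tank" and an already-created smaller pool "newtank" (names are illustrative):

              # snapshot everything in the old pool, then replicate it into the smaller pool
              zfs snapshot -r tank@migrate
              zfs send -R tank@migrate | zfs receive -F newtank

              # once the copy is verified, the old pool can be retired
              zpool destroy tank
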
      • by Bengie ( 1121981 )
        As much as I love ZFS over BTRFS, monoculture is bad. If anything, BTRFS is a learning experience for the entire community, but I do think ZFS needs first class support.
      • by Jezral ( 449476 )

        I used to use ZFS on my hacky home backup solution (Linux in VirtualBox with USB storage - yes, I know), but it would corrupt the disks once per month or so. Switched to btrfs, and it just works.

        Features that btrfs has over ZFS, and that I use:
        - Mutable snapshots. It is infuriating that ZFS's snapshots are immutable. Mind you, I very rarely modify snapshots, but I damn well want to be able to without having to dump and reload all the data. This alone is reason enough that I'll never again use ZFS where btrfs is available.
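
        A small sketch of the snapshot-mutability difference described above, with illustrative paths and dataset names:

            # btrfs snapshots are writable by default; -r makes them read-only
            btrfs subvolume snapshot /data /data/snap-rw
            btrfs subvolume snapshot -r /data /data/snap-ro

            # ZFS snapshots are always read-only; the closest thing to a mutable one is a writable clone
            zfs snapshot tank/data@now
            zfs clone tank/data@now tank/data-rw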

    • by TrekkieGod ( 627867 ) on Tuesday October 06, 2015 @06:13PM (#50674471) Homepage Journal

      I don't know why so many in the Linux community are so hooked on ZFS. BTRFS has a feature set that is rapidly getting there, and it's becoming more and more mature, with its code already in the upstream kernel.

      Why not just put your energy there?

      As someone who uses both zfs (for file server storage) and btrfs (for the OS), my reason for using zfs is raidz. If btrfs implemented something similar, I'd drop zfs.
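
      For reference, a minimal sketch of what raidz setup looks like, using hypothetical whole-disk device names (raidz is single parity, raidz2 double, raidz3 triple):

          # single-parity raidz vdev across four disks
          zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde

          # or double parity across six disks, tolerating two simultaneous failures
          zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg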

      • RAID support has been built into btrfs for ages now. It's been rock stable for me for over a year with an 8-disk array in a RAID 5 configuration. And the simplicity with which you can add and remove drives, parity and otherwise, makes btrfs a total winner IMO.

        https://btrfs.wiki.kernel.org/... [kernel.org]
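
        A short sketch of the add/remove workflow being praised here, with illustrative device and mount point names:

            # grow the array by adding a disk, then rebalance data across all members
            btrfs device add /dev/sdf /mnt
            btrfs balance start /mnt

            # shrink the array: data is migrated off the disk before it is removed
            btrfs device delete /dev/sdb /mnt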

        • by rl117 ( 110595 ) <{ten.erbiledoc} {ta} {hgielr}> on Tuesday October 06, 2015 @06:59PM (#50674881) Homepage

          It can certainly work when everything is working correctly. Have you tested its behaviour when things don't work correctly, for example by pulling the cable on one of the discs while it's running? Does it carry on running, and does it transparently recover when you plug it back in? When I had a cable become unseated and the connection glitched, Btrfs happily toasted the data on the drive, and its mirror, and panicked the kernel whenever the discs were plugged in; I had to zero them out on another system before I could even try to reformat them. One of the major historical weak points has been that the failure codepaths were poorly tested, and this can come back to bite you quite badly.

          • by ZorinLynx ( 31751 ) on Tuesday October 06, 2015 @08:21PM (#50675473) Homepage

            I recently "fixed" one of our ZFS fileservers at work, which was performing very poorly, by *removing* a failing drive. The drive was taking a few seconds to read blocks, obviously dying, so it was slowing down the entire system. As soon as I pulled it, ZFS finally declared it dead and the filesystem was back to running at full performance.

            I felt so confident being able to just walk up and yank the troublesome drive; that's how much trust I've built in ZFS. It's incredibly stable and fault tolerant.
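
            For comparison, a sketch of the more orderly way to retire a slow-but-not-yet-dead drive, assuming a pool named "tank" and illustrative device names:

                # identify the device that is erroring or degraded
                zpool status tank

                # take the suspect drive out of service, then resilver onto a replacement
                zpool offline tank /dev/sdc
                zpool replace tank /dev/sdc /dev/sdj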

            • by Fweeky ( 41046 )

              This is one of the things the Solaris-derived versions have tended to be better at handling: ZFS expects failing drives to be detected and managed by an external fault management service (fmd), which doesn't exist on other OSes. ZFS itself doesn't mark a drive as bad unless it outright disappears from the system.

        • by fnj ( 64210 )

          Raid support has been built into btrfs for ages now.

          The commenter you are replying to plainly said RAIDZ. If you don't know how vastly superior RAIDZ is to RAID, you'd be better off not making your unwitting lack of knowledge so obvious.

          • I know full well what RAIDZ is, and it's not significantly different from the RAID in btrfs. Maybe you should understand what they both are before commenting, rather than assuming one is superior simply because it has a "z" on the end.

            • Re: (Score:3, Informative)

              by fnj ( 64210 )

              I don't know what your definition of "significant" is, but the BTRFS wiki [kernel.org] says "The one missing piece, from a reliability point of view, is that it is still vulnerable to the parity RAID 'write hole', where a partial write as a result of a power failure may result in inconsistent parity data." ZFS RAIDZ is expressly free from the write hole. That is very significant to me.

              RAIDZ's write hole advantage is a product of three specifics: (1) RAID5 has n data disks plus one dedicated parity-only disk, whereas ZFS distributes parity across all of the disks in the vdev.

              • by rahvin112 ( 446269 ) on Tuesday October 06, 2015 @09:32PM (#50675931)

                The features you list as "specific" to ZFS exist in btrfs: btrfs can have dedicated parity drives, or you can spread the data and parity across multiple drives in any order or pattern you would like.

                The write hole in btrfs is AFAIK also present in ZFS, listed as a risk of a power failure during a write on a RAID pool with COW filesystems. The risk is that losing power mid-write can leave multiple different parity blocks for the same data; in that case the filesystem cannot identify the correct data or parity (depending on the order you write them), and the only solutions involve falling back to a known-good (older) copy, which loses the data from that write.

                IIRC this is a listed risk in the ZFS FAQ, just as the same write-hole risk exists in btrfs. Also IIRC, ZFS takes the path of writing parity before data, so it will lose new data rather than risk corrupting existing parity blocks, whereas (again IIRC) btrfs COWs the new data and then COWs the parity block, which risks inconsistent parity but carries less risk of data loss (as parity can be recomputed).

                Two different solutions to the same problem, one that is intrinsic to COW filesystems with parity data. Neither is particularly better IMO, as both run the risk of data loss in an extreme event, though such events are rare.

        • by Bengie ( 1121981 )
          Don't use RAID5. When one drive dies, there is a very good chance another drive will die, even if that drive is a different model or brand.
          • There are people who argue you shouldn't use RAID at all unless it's RAID 10. RAID isn't a backup solution; it's a performance and reliability solution. If you need data backup, you should be using real backups, not RAID.

          • "Don't use RAID5. When one drive dies, there is a very good chance another drive will die, even if that drive is a different model or brand."

            True, but "very good chance" is still less than 100%.

            I had a lot of systems with RAID5, and yes, I lost filesystems due to a second drive (and even a third) dying before recovery from the previous failure finished; but in the majority of cases that didn't happen, so all in all RAID5 showed its value.

            Maybe you are one of those who think RAID means "backup" instead of "higher MTBF".

          • by delt0r ( 999393 )
            We have had this happen more than once. Basically, thrashing the drives while the array is recovering forces or trips more failure states, or something. However, the worst I ever had to deal with was a RAID setup for an Apple Time Machine backup. It was unrecoverable at every level because OS X did something stupid. Why anyone would use a Mac as a server I will never know.
    • by rl117 ( 110595 ) <{ten.erbiledoc} {ta} {hgielr}> on Tuesday October 06, 2015 @06:29PM (#50674623) Homepage

      It's really quite simple. ZFS is a great filesystem. It's reliable, performant, featureful, and very well documented. Btrfs has a subset of the ZFS featureset, but fails on all the other counts: it has terrible documentation, and it's one of the least reliable and least performant filesystems I've ever used. Having used both extensively over several years, and hammered both over long periods, I've suffered repeated Btrfs data loss and performance problems. ZFS, on the other hand, has worked well from day one, and I've yet to experience any problems. Neither is as fast as ext4 on single discs, but you're getting resilience and reliability, not raw speed, and it scales well as you add more discs; exactly what I want for storing my data. And having a filesystem which works on several operating systems has a lot of value. I took the discs comprising a ZFS zpool mirror from my Linux system and slotted them into a FreeBSD NAS. One command to import the pool (zpool import) and it was all going. Later on I added l2arc and zil (cache and log) SSDs to make it faster; each was one command to add and entirely trouble-free.

      Over the years there has been lots of publicity about the Btrfs featureset and development. But, as you said in your comment, it's "rapidly getting there". That's been the story since day one, and it's not got there. Not even close. Until its major bugs and unfortunate design flaws (getting unbalanced to the point of unusability, silly link limits) are fixed, it will never get there. I had high hopes for Btrfs, and I was rewarded with severe data loss or complete unusability each and every time I tried it over the years since it was started. Eventually I switched to ZFS out of a need for something that actually worked and could be relied upon. Maybe it will eventually become suitable for serious production use, but I lost hope of that a good while back.
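
      A minimal sketch of the pool move and the cache/log additions described above, with hypothetical FreeBSD device names:

          # on the new host, import the existing pool by name
          zpool import tank

          # add an SSD as L2ARC (read cache) and another as a separate ZIL/log device
          zpool add tank cache /dev/ada4
          zpool add tank log /dev/ada5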

    • I don't know why so many in the Linux community are so hooked on ZFS. BTRFS has a feature set that is rapidly getting there, and it's becoming more and more mature, with its code already in the upstream kernel.

      Why not just put your energy there?

      The fact is that 99% of the users couldn't care less about ZFS or BTRFS. Ext4 is just fine, and ext3 was also fine before, for 99% of the use cases. Hence, most people will just stick to their default FS.
      Likewise, most Windows users are fine with NTFS, and wouldn't switch to ZFS even if it became available on Windows.

    • by cas2000 ( 148703 ) on Tuesday October 06, 2015 @06:51PM (#50674825)

      Because there's really no comparison between btrfs and ZFS. ZFS is years ahead in both stability and features. Only someone who's never used both would say that they are in any way close.

      The only really useful thing that btrfs does that ZFS does not is rebalancing. That's a great feature and I'd love to see it in ZFS (but it will probably never get there).

      ZFS has lots of features that btrfs doesn't have and likely never will.

      • by Bengie ( 1121981 )
        They recently said that block pointer rewrite, which is required for rebalancing, will not happen. They have looked at the issue for a few years now and cannot figure out a safe way to make it work that wouldn't open up a window for data loss. The only way to rebalance or shrink is to make a new pool on another set of drives and migrate the existing pool's data over.
      • Device removal on ZFS may be a thing [delphix.com], and may not require block pointer rewrite; it's the latter that is probably not going to happen.

    • by mi ( 197448 )

      I don't know why so many in the Linux community are so hooked on ZFS.

      Because it is good. In particular, it offers the only sensible way to make good use of the ephemeral storage offered by Amazon Web Services (AWS) in the general case: the fast (SSD) instance storage can be used as a read cache for a stable of ZFS mount points.

      Why not just put your energy there?

      Why put any energy into reinventing the wheel? And struggle with triangular "wheels" in the process?
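
      A sketch of the AWS arrangement being described, assuming the durable pool lives on EBS and the instance-store (ephemeral) SSD shows up as a hypothetical /dev/nvme1n1:

          # use the ephemeral SSD purely as L2ARC; if the instance loses it, only cache is lost
          zpool add tank cache /dev/nvme1n1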

    • Re: (Score:3, Interesting)

      by Bengie ( 1121981 )
      BTRFS is the SystemD of filesystems: lots of features, poor design. Features can be great, but they come at a cost. To summarize the issues with BTRFS: it violates the principle of least surprise, which can result in some completely unexpected gotchas. The other thing is that it is not truly transactional/atomic. By design, it requires fsck, which means the filesystem can be left in an inconsistent state. This opens the door to a host of issues that ZFS is guaranteed never to have.

      Not to mention, there ar
    • I don't know why so many in the Linux community are so hooked on ZFS. BTRFS has a feature set that is rapidly getting there,

      I think you already explained it in that first sentence... ZFS has been stable, reliable, and successfully managing huge amounts of data for the past decade (since 2005). BTRFS is still unstable, not remotely a suitable alternative to ZFS, with only the vague promise of maybe eventually "getting there".

    • by mcrbids ( 148650 )

      5 years ago, it seemed that BTRFS was rapidly getting there, and its inclusion into the kernel made it feel like a rather sure bet!

      (crickets)

      5 years later, BTRFS is still "rapidly" getting there. I've tried it numerous times and had horrible data loss events literally every single time, most recently a month ago.

      Meanwhile, we're using ZFS on Linux in a complex production environment with a worst-case mixed read/write use case, and it's been absolutely rock solid and bulletproof; demonstrably more stable.

  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Wednesday October 07, 2015 @01:39AM (#50676937) Homepage Journal

    Regardless of what Ubuntu has convinced themselves of, in this context the ZFS filesystem driver would be an unlicensed derivative work. If they don't want it to be so, it needs to be in user-mode instead of loaded into the kernel address space and using unexported APIs of the kernel.

    A lot of people try to deceive themselves (and you) that they can do silly things, like putting an API between software under two licenses, and that such an API becomes a "computer condom" that protects you from the GPL. This rationale was never true and was overturned by the court in the appeal of Oracle v. Google.
