BTRFS is getting there (Score:5, Insightful)
I don't know why so many in the Linux community are so hooked on ZFS. BTRFS has a feature set that is rapidly getting there, and it's becoming more and more mature, with its code already upstream.
Why not just put your energy there?
Re:BTRFS is getting there (Score:5, Informative)
It's really quite simple. ZFS is a great filesystem: reliable, performant, featureful, and very well documented. Btrfs has a subset of the ZFS feature set but fails on all the other counts. Its documentation is terrible, and it's one of the least reliable and least performant filesystems I've ever used. Having used both extensively over several years, and hammered both over long periods, I've suffered repeated Btrfs data loss and performance problems. ZFS, on the other hand, has worked well from day one, and I've yet to experience any problems. Neither is as fast as ext4 on single discs, but you're getting resilience and reliability rather than raw speed, and it scales well as you add more discs; exactly what I want for storing my data.

Having a filesystem which works on several operating systems also has a lot of value. I took the discs comprising a ZFS zpool mirror from my Linux system and slotted them into a FreeBSD NAS. One command to import the pool (zpool import) and it was all up and running. Later on I added l2arc and zil (cache and log) SSDs to make it faster; each was a single command to add, and entirely trouble-free (the commands are sketched below).
Over the years there has been lots of publicity about the Btrfs feature set and development. But, as you said in your comment, it's "rapidly getting there". That's been the story since day one, and it's not got there. Not even close. Until its major bugs and unfortunate design flaws (becoming unbalanced to the point of unusability, silly hard-link limits) are fixed, it never will. I had high hopes for Btrfs, and I was rewarded with severe data loss or complete unusability every time I tried it in the years since it started. Eventually I switched to ZFS out of a need for something that actually worked and could be relied upon. Maybe Btrfs will eventually become suitable for serious production use, but I lost hope of that a good while back.
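For reference, each of the steps described above is a single zpool command. The pool and device names here are illustrative, not taken from the post:

# zpool import tank
# zpool add tank cache ata-SSD_CACHE_SERIAL
# zpool add tank log ata-SSD_LOG_SERIAL

The first imports the pool on the new machine; the other two add an l2arc (cache) device and a zil (log) device to it.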
Re: (Score:1)
What do you mean it scales as you add more disks? You can't add disks to a ZFS array. You can replace them with bigger disks, but not just add them.
Re:BTRFS is getting there (Score:4, Informative)
I mean the performance gains as you add more discs.
And regarding adding discs to an array, you certainly can: just add additional RAID sets to the pool. That is, rather than adding discs to the existing array, you scale up by adding additional arrays to the same pool. See the documentation. [oracle.com]
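A minimal sketch of that, assuming a pool called tank and six new discs (names illustrative):

# zpool add tank raidz2 sdi sdj sdk sdl sdm sdn

The new RAIDZ2 becomes a second top-level vdev, and ZFS stripes writes across both.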
Re: (Score:2)
Wrong. You can't scale an existing VDEV by adding discs to it, but you can expand a pool without practical limit by building new VDEVs and adding them. E.g., if you have 6 drives in a RAIDZ2, you can build a RAIDZ2 out of 6 (or some other number of) RAIDZ2s, including your original and 5 new ones. The downtime when you switch over to your expanded configuration...
Re: BTRFS is getting there (Score:2)
You can attach and detach block devices from vdevs at will. You can't remove top-level vdevs, but you can add them; I've done it many times.
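A sketch of the attach/detach operations, with hypothetical pool and device names (attach turns a lone disc into a mirror, or widens an existing mirror; detach removes a mirror member):

# zpool attach tank ata-EXISTING_DISK ata-NEW_DISK
# zpool detach tank ata-NEW_DISK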
Re: (Score:2)
--You can definitely add more disks if you are using mirrored drives in your pool, instead of RAIDZ. I created a Linux ZFS RAID0 (no redundancy) pool with 2 brand-new drives initially, then bought 2 more drives of the same brand and capacity a month later, and upgraded the pool in-place with no downtime to a zRAID10.
--If I want to expand the size of the pool, I can just add 2 more disks in a mirrored configuration.
# zpool add mirpool mirror ata-ST9500420AS_5VJDN5KL ata-ST9500420AS_5VJDN5KJ
--Note that this s
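A quick way to confirm the resulting layout, using the pool name from the example above, is the standard status command, which should show the new mirror as a second top-level vdev:

# zpool status mirpool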