Ubuntu Plans To Make ZFS File-System Support Standard On Linux 279
An anonymous reader writes: Canonical's Mark Shuttleworth revealed today that they're planning to make ZFS standard on Ubuntu. They are planning to include the ZFS file-system as "standard in due course," but no details were revealed beyond that. However, ZFS On Linux contributor Richard Yao has said they do plan on including it in their kernel for 16.04 LTS, and that the GPL vs. CDDL license worries aren't actually a problem. Many Linux users have wanted ZFS on Linux for years, but aside from the out-of-tree module there hasn't been any luck getting it into the mainline kernel or into tier-one Linux distributions, due to license differences.
What he didn't say (Score:4, Informative)
Re: (Score:3)
BTRFS is getting there (Score:5, Insightful)
I don't know why so many in the Linux community are so hooked on ZFS. BTRFS has a feature set that is rapidly getting there, and it's becoming more and more mature in terms of code that is already upstream.
Why not just put your energy there?
Re:BTRFS is getting there (Score:5, Interesting)
Re:BTRFS is getting there (Score:5, Interesting)
Because BTRFS is and has always been redundant? ZFS is far more mature, and stories abound of BTRFS failing on people. BTRFS is still unstable, particularly their RAID5/6 support. Developers should be putting their efforts into ZFS instead of BTRFS.
Re: (Score:2)
Just because ZFS is mature on Solaris doesn't make it mature on Linux. Out of tree modules always suck in the long run. I agree that there is more future in BTRFS because of that. Unless they finally release ZFS under the GPL.
Re: (Score:2)
XFS is mature on Linux. Just need to add snapshots and things back in. I worked for a start-up that did just that, and then we never got to the point where we could release our XFS snapshot and replication support. Pity.
Re:BTRFS is getting there (Score:5, Interesting)
I've been using ZFS on Linux for about 3.5 years now, it's been pretty stable. I can't say I've heard of a case of it failing for somebody other than user error.
Re:BTRFS is getting there (Score:5, Interesting)
Likewise; we use it all over the place in our department. We have a bunch of 96TB/80TB usable ZFS file servers based on 24 4TB SATA drives. The performance is amazing for the price and they are rock solid under all kinds of heavy load, except for one tiny bug we hit recently that has been fixed already.
Re:BTRFS is getting there (Score:5, Informative)
zfsonlinux hit both unstable and stable releases on Linux earlier than btrfs: if your only definition of stable is how long it's been around on Linux, then btrfs is still less mature.
Being in-tree says nothing about the stability of a module, but ZFS doesn't need to be under the GPL to be in Linus' tree: the GPL does not forbid code aggregation. That said, neither Linus nor the ZoL team want ZoL in Linus' tree.
Re: (Score:3)
>neither Linus nor the ZoL team want ZoL in Linus' tree.
I bang my head on my desk frequently over Linus' stubborn nature, then I realize it's that same stubborn nature that makes Linux as great as it is, so I forgive him.
If ZFS were part of the kernel, bugfixes and updates would have to follow the Linux kernel release schedule, which would make it a huge hassle to update the code on running systems without building custom kernels.
Building custom kernels is something you shouldn't be doing in a production
Re: (Score:2)
and pre-load the database (most common way of getting data into the database initially) then hit it with an OLTP workload, ZFS will perform *terribly* - because it got large streaming writes up front, allocated huge stripe sizes, which makes rewrite performance go to hell in a handbasket,
L2ARC is supposed to fix this. I've heard good things about it.
Re: BTRFS is getting there (Score:2)
Running any database on RAID-5/6 or RAID-Z storage will suck. Better to use mirrored storage for random IO workloads.
Re: (Score:2)
There is no problem with RAID 5 on Btrfs. I've been running a RAID 5 on btrfs for more than a year. I'm not sure why ZFS fanbois always resort to lying about btrfs features and support, whether it's just laziness on their part or astroturfing.
ZFS may be stable, but its development is relatively frozen, and btrfs is in many ways better but has lagged in development. This is probably because Oracle was the primary backer of btrfs before they bought Sun and there just aren't enough developers working on
Re: (Score:2)
In what way is development on ZFS frozen? Development is extremely active, probably far more so than btrfs.
Re: (Score:2, Interesting)
13.1% of the code changed between 0.6.4 and 0.6.5:
http://fossies.org/diffs/zfs/0.6.4_vs_0.6.5/index.html
That is far from being frozen. Even Linux does not have that percentage of its code change between releases:
http://fossies.org/diffs/linux/4.2_vs_4.3-rc1/
It would be interesting if someone checked how much fs/btrfs changes between releases.
Re:BTRFS is getting there (Score:5, Insightful)
You can't shrink ZFS pools in place, because it can't be done atomically, and the developers refuse to do anything that lets the end user shoot themselves in the foot, like leaving the FS in an inconsistent state. But you can create a new, smaller pool and copy your larger pool into it, as long as the data fits. You can't do it in-place, but you can do it. It just sucks to do that with a 1PiB+ pool. But who shrinks those?
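A rough sketch of that migration, assuming a source pool named bigpool and a freshly created smaller pool named smallpool (both names invented for illustration):

# take a recursive snapshot of everything in the old pool
zfs snapshot -r bigpool@migrate
# replicate the whole thing, snapshots and properties included, into the smaller pool
zfs send -R bigpool@migrate | zfs receive -F smallpool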
Re: (Score:3)
Re: (Score:3)
I used to use ZFS on my hacky home backup solution (Linux in VirtualBox with USB storage - yes, I know), but it would corrupt the disks once per month or so. Switched to btrfs, and it just works.
Features that btrfs has over ZFS, and I use:
- Mutable snapshots. It is infuriating that ZFS's snapshots are immutable. Mind you, I very rarely modify snapshots, but I damn well want to be able to without having to dump+reload all data. This alone is reason enough that I'll never again use ZFS where btrfs is availabl
Re:BTRFS is getting there (Score:5, Interesting)
I don't know why so many in the Linux community are so hooked on ZFS. BTRFS has a feature set that is rapidly getting there, and it's becoming more and more mature in terms of code that is already upstream.
Why not just put your energy there?
As someone who uses both zfs (for file server storage) and btrfs (for the OS), my reason for using zfs is raidz. If btrfs implemented something similar, I'd drop zfs.
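For the curious, standing up a raidz pool really is a one-liner; a rough sketch with made-up disk names:

# six-disk pool with double parity (raidz2), survives any two drive failures
zpool create tank raidz2 sdb sdc sdd sde sdf sdg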
Re: (Score:2)
Raid support has been built into btrfs for ages now. It's been rock stable for me for over a year with an 8 disk array in raid 5 configuration. But the simplicity with which you can add and subtract drives, parity drives and others makes btrfs a total winner IMO.
https://btrfs.wiki.kernel.org/... [kernel.org]
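Roughly what growing and shrinking looks like in practice (device names and mount point are invented; the balance spreads existing data onto the new drive):

# grow the array by adding another drive, then rebalance onto it
btrfs device add /dev/sdh /mnt/array
btrfs balance start /mnt/array
# shrink it again; btrfs migrates the data off the drive before removing it
btrfs device delete /dev/sdh /mnt/array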
Re:BTRFS is getting there (Score:5, Insightful)
It can certainly work when everything is working correctly. Have you tested its behaviour when things don't work correctly, for example by pulling the cable on one of the discs as it's running? Does it carry on running, does it transparently recover when you plug it back in? When I had a cable become unseated and the connection glitched, Btrfs happily toasted the data on the drive, and its mirror, and panicked the kernel whenever the discs were plugged in; I had to zero them out on another system before I could even try to reformat them. One of the major historical weak points has been that the failure codepaths were poorly tested, and this can come to bite you quite badly.
Re:BTRFS is getting there (Score:5, Interesting)
I recently "fixed" one of our ZFS fileservers at work which was performing very poorly by *removing* a failing drive. The drive was taking a few seconds to read blocks, obviously dying, so it was slowing down the entire system. As soon as I pulled it ZFS finally declared it dead and the filesystem was running at full performance again.
I felt so confident being able to just walk up and yank the troublesome drive; that's how much trust I've built in ZFS. It's incredibly stable and fault tolerant.
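For reference, the whole drill is only a few commands (pool and device names made up here):

# see which device is misbehaving and whether the pool is degraded
zpool status -v tank
# stop using the dying disk, then swap in a replacement and let it resilver
zpool offline tank sdc
zpool replace tank sdc sdh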
Re: (Score:3)
This is one of the things the Solaris-derived versions have tended to be better at handling - ZFS expects failing drives to be detected and managed by an external fault management service (fmd), which doesn't exist on other OSes. ZFS itself doesn't mark a drive as bad unless it outright disappears from the system.
Re: (Score:2)
This particular incident was on a much older kernel; this bug is now fixed.
But I've been testing Btrfs regularly since the start. Every single time I've retried testing with it, I have hit a different bug or design flaw. It's been a repeated pattern since the start. It's gotten better for sure, but it's never reached the point of being truly solid, or performant. I can't trust it with my data, which is of course its primary purpose.
The last time I used it in anger, I was doing repeated whole-archive reb
Re: (Score:2)
The commenter you are replying to plainly said RAIDZ. If you don't know how vastly superior RAIDZ is to RAID, you'd be better off not making your unwitting lack of knowledge so obvious.
Re: (Score:2)
I know fully well what raidz is and it's not significantly different than the raid in btrfs. Maybe you should understand what they are before you make a comment simply because one has a z on the end.
Re: (Score:3, Informative)
I don't know what your definition of "significant" is, but the BTRFS wiki [kernel.org] says "The one missing piece, from a reliability point of view, is that it is still vulnerable to the parity RAID 'write hole', where a partial write as a result of a power failure may result in inconsistent parity data." ZFS RAIDZ is expressly free from the write hole. That is very significant to me.
RAIDZ's write hole advantage is a product of three specifics: (1) RAID5 has n data disks plus one dedicated parity-only disk; ZFS distrib
Re:BTRFS is getting there (Score:4, Informative)
The features you list as "specific" to zfs exist in btrfs. btrfs can have dedicated parity drives or you can spread the data and parity across multiple drives in any order or pattern you would like.
The write hole in btrfs is AFAIK also present in zfs, and listed as a risk of a power failure during a write on a raid pool with COW filesystems. The risk is that loss of power during a write can leave multiple different parity blocks for the same data; in that case the filesystem cannot identify the correct data or parity (depending on the order you write them), and the only real solutions involve falling back to a known good (older) copy, which means losing the data from that write.
IIRC this is a listed risk in the FAQ for ZFS. Just as the same write hole risk exists in btrfs. Also IIRC ZFS takes the path of writing parity before data such that it will lose new data rather than risk a corruption of existing parity blocks. Whereas, again IIRC btrfs COW's the new data then COW's the parity block which risks inconsistent parity but at less risk of data loss (as parity can be recomputed).
Two different solutions to the same problem that is intrinsic to COW filesystems with parity data. Neither is particularly better IMO as both run the risk of data loss in an extreme event. Though such events are rare.
Re: (Score:2)
My understanding is that it does take advantage of COW. The problem is your parity has two copies, and you have two copies of the data that may or may not match the parity because it lost power during the write. This is why they call it a write hole: the algorithm can't be sure which copy of the written data is the right one, because there are two copies of the parity data as well. It's a tricky problem that's going to need either some pretty smart algorithms to sort out which copy is the right one or
Re: (Score:3)
Re: (Score:3)
There are people that argue you shouldn't use raid at all unless it's 10. Raid isn't a backup solution. It's a performance and reliability solution. If you need data backup you should be using real backups, not raid.
Re: (Score:2)
"Don't use RAID5. When one drive dies, there is a very good chance another drive will die, even if the that drive is a different model or brand."
True, but "very good chance" is still less than 100%.
I had a lot of systems with RAID5, and yes, I lost filesystems to a second drive (and even a third) dying before recovering from the previous failure, but the majority survived, so all in all, RAID5 showed its value.
Maybe you are one of those that think RAID means "backup" instead of "higher MTBF".
Re: (Score:2)
Re:BTRFS is getting there (Score:5, Informative)
It's really quite simple. ZFS is a great filesystem. It's reliable, performant, featureful, and very well documented. Btrfs has a subset of the ZFS featureset, but fails on all the other counts. It has terrible documentation and it's one of the least reliable and least performant filesystems I've ever used. Having used both extensively over several years, and hammered both over long periods, I've suffered from repeated Btrfs dataloss and performance problems. ZFS on the other hand has worked well from day one, and I've yet to experience any problems. Neither are as fast as ext4 on single discs, but you're getting resilience and reliability, not raw speed, and it scales well as you add more discs; exactly what I want for storing my data. And having a filesystem which works on several operating systems has a lot of value. I took the discs comprising a ZFS zpool mirror from my Linux system and slotted them into a FreeBSD NAS. One command to import the pool (zpool import) and it was all going. Later on I added l2arc and zil (cache and log) SSDs to make it faster, both one command to add and also entirely trouble-free.
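That import plus the later SSD additions really is just a handful of commands; a rough sketch with invented pool and device names:

# detect and import the pool that was created on the other OS
zpool import tank
# add an SSD as L2ARC read cache, and a mirrored pair as the ZIL/SLOG
zpool add tank cache sdb
zpool add tank log mirror sdc sdd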
Over the years there has been a lot of publicity about the Btrfs featureset and development. But as you said in your comment, it's "rapidly getting there". That's been the story since day one. And it's not got there. Not even close. Until its major bugs and unfortunate design flaws (getting unbalanced to the point of unusability, silly link limits) are fixed, it will never get there. I had high hopes for Btrfs, and I was rewarded with severe dataloss or complete unusability each and every time I tried it over the years since it was started. Eventually I switched to ZFS out of a need for something that actually worked and could be relied upon. Maybe it will eventually become suitable for serious production use, but I lost hope of that a good while back.
Re:BTRFS is getting there (Score:4, Informative)
I mean the performance gains as you add more discs.
And regarding adding discs to an array, you certainly can. Just add additional raid sets to the pool. That is, rather than adding discs to the existing array, you scale it up by adding additional arrays to the same pool. See the documentation. [oracle.com]
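For example (hypothetical disk names), growing a pool by striping a second raidz2 set alongside the first:

# the new raidz2 vdev is added to the pool; capacity grows immediately, no rebuild needed
zpool add tank raidz2 sdh sdi sdj sdk sdl sdm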
Re: (Score:2)
Wrong. You can't scale a pool by adding VDEVs to the existing pool, but you can expand without practical limit by generating VDEVs out of VDEVs and adding them. E.g., if you have 6 drives in a RAIDZ2, you can build a RAIDZ2 out of 6 (or some other number of) RAIDZ2s, including your original and 5 new ones. The downtime when you switch over to your expanded confi
Re: (Score:2)
Re: (Score:2)
--You can definitely add more disks if you are using mirrored drives in your pool, instead of RAIDZ. I created a Linux ZFS RAID0 (no redundancy) pool with 2 brand-new drives initially, then bought 2 more drives of the same brand and capacity a month later, and upgraded the pool in-place with no downtime to a zRAID10.
--If I want to expand the size of the pool, I can just add 2 more disks in a mirrored configuration.
# zpool add mirpool mirror ata-ST9500420AS_5VJDN5KL ata-ST9500420AS_5VJDN5KJ
--Note that this s
Re: (Score:3)
I don't know why so many in the Linux community are so hooked on ZFS. BTRFS has a feature set that is rapidly getting there, and it's becoming more and more mature in terms of code that is already upstream.
Why not just put your energy there?
The fact is that 99% of the users couldn't care less about ZFS or BTRFS. Ext4 is just fine, and ext3 was also fine before, for 99% of the use cases. Hence, most people will just stick to their default FS.
Likewise, most Windows users are fine with NTFS, and wouldn't switch to ZFS even if it became available on Windows.
Re:BTRFS is getting there (Score:5, Interesting)
Because there's really no comparison between btrfs and ZFS. ZFS is years ahead in both stability and features. Only someone who's never used both would say that they are in any way close.
The only really useful thing that btrfs does that ZFS does not is rebalancing - that's a great feature and I'd love to see it in ZFS (but it will probably never get there).
ZFS has lots of features that btrfs doesn't have and likely never will.
Re: (Score:3)
Re: (Score:2)
Device removal on ZFS may be a thing [delphix.com], and may not require block pointer rewrite; it's the latter that is probably not going to happen.
Re: (Score:3)
Because it is good. In particular, it offers the only sensible way to make good use of the ephemeral storage offered by Amazon Web Services (AWS) in the general case — the fast (SSD) storage can be used as read-cache for a stable of ZFS mount-points.
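On an EC2 instance that might look something like this (the device name is a guess; instance-store disks usually show up as xvd* or nvme* devices):

# use the ephemeral SSD as L2ARC; losing a cache device costs nothing but warm cache
zpool add tank cache xvdb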
Why put any energy into reinventing the wheel? And struggle with triangular "wheels" in the process?
Re: (Score:3, Interesting)
Not to mention, there ar
Re: (Score:2)
I think you already explained it in that first sentence... ZFS has been stable, reliable, and successfully managing huge amounts of data for the past decade (2005). BTRFS is still unstable, not remotely a suitable alternative for ZFS, with only the vague promise of maybe eventually "getting there".
Re: (Score:3)
5 years ago, it seemed that BTRFS was rapidly getting there, and its inclusion into the kernel made it feel like a rather sure bet!
(crickets)
5 years later, BTRFS is still "rapidly" getting there. I've tried it numerous times and had horrible data loss events literally every single time, and this as recently as a month ago.
Meanwhile, we're using ZFS on Linux in a complex production environment in a worst-case mixed read/write use case and it's been absolutely rock solid bullet proof, demonstrably more stable
CDDL and GPL don't mix (Score:4, Informative)
Regardless of what Ubuntu has convinced themselves of, in this context the ZFS filesystem driver would be an unlicensed derivative work. If they don't want it to be so, it needs to be in user-mode instead of loaded into the kernel address space and using unexported APIs of the kernel.
A lot of people try to deceive themselves (and you) that they can do silly things, like putting an API between software under two licenses, and that such an API becomes a "computer condom" that protects you from the GPL. This rationale was never true and was overturned by the court in the appeal of Oracle v. Google.
Re:ZFS is nice... (Score:4, Insightful)
Why would you need nvidia drivers on a file server? Use Ubuntu Server, it's made for, well, being a server.
Re: (Score:3, Funny)
My file server has a very low-end nVidia graphics card in it. There was some sort of issue with the stock drivers that shipped with the distro, such that I got no video output at all, and I don't have any GUI installed, just text-mode console. I had to install the nVidia drivers to get it working.
Re: (Score:2)
SSH is my primary interface to the server, but sometimes you've got to get on a box locally, like if you mess up something network related, or you mess up a change to grub, or who knows what. It's not common, but I don't have a serial terminal, so having video output when needed is very important.
Re: (Score:2)
What kind of server doesn't have IPMI?
Re: (Score:2)
The home kind?
Re: (Score:2)
I bought a Lenovo TS440 on Amazon for $400. Included a SAS controller, E3-1245 CPU, hot-swappable PSU, motherboard, and hot-swap HDD bays in the case... They show up now and then, check slickdeals.
Re: (Score:2)
Congrats. That's still more money than my server was, even if you take into account the VGA cable I had to buy because of the lack of IPMI and every other monitor in the house having an HDMI cable.
In 3 years I've had to access the system from a local console precisely once, when a typo in a script brought down the primary network interface. IPMI has almost zero use case for me and attaching a monitor and keyboard to the server is something that takes the best part of 30 seconds, so it doesn't even warrant a cons
Re:ZFS is nice... (Score:4, Insightful)
A typical home Linux server - AKA an old PC - won't have IPMI. Actual servers typically will have IPMI, but they cost $BIG_BUCKS$. And even then, IPMI is extremely limited.
On the Dell servers I bought a few months ago I can't do anything useful with it beyond power on/off or text-only console redirection over serial (over LAN) before the OS loads (I can get into BIOS and the RAID controller ROM, not much else).
Unless of course I pony up more cash for their iDRAC Standard/Pro/Enterprise/etc. shit. THEN I can get graphical console redirection, some storage space to flash firmware from, and even USB/optical drive redirection.
Re: ZFS is nice... (Score:2)
Supermicro X9 server boards come with IPMI and are not particularly expensive.
Re: (Score:2)
A typical home Linux server - AKA an old PC - won't have IPMI. Actual servers typically will have IPMI, but they cost $BIG_BUCKS$. And even then, IPMI is extremely limited.
On the Dell servers I bought a few months ago I can't do anything useful with it beyond power on/off or text-only console redirection over serial (over LAN) before the OS loads (I can get into BIOS and the RAID controller ROM, not much else).
Unless of course I pony up more cash for their iDRAC Standard/Pro/Enterprise/etc. shit. THEN I can get graphical console redirection, some storage space to flash firmware from, and even USB/optical drive redirection.
And IPMI console typically requires java. Within a year or so NO browser will support that!
Re: (Score:2)
No, you can use serial-over-LAN via native utilities like ipmitool. You're talking about the idiot-friendly web interface a few OEMs happen to include. Most ipmi implementations don't even have any web/browser interface to begin with.
Re: (Score:2)
No, you can use serial-over-LAN via native utilities like ipmitool. You're talking about the idiot-friendly web interface a few OEMs happen to include. Most ipmi implementations don't even have any web/browser interface to begin with.
When you are accessing a server on the other side of the world with a dedicated server hosting provider, serial takes a bit more set-up for them and they just don't do it. I've certainly never encountered a single one, and I deal with hundreds of such hosting providers.
Many sysadmins in the real world are very sadly stuck with the web interface and have no other option.
Re: (Score:2)
And IPMI console typically requires java. Within a year or so NO browser will support that!
No, actually the typical IPMI console is AMT these days, and you can connect to it with an ordinary VNC client, which isn't going away any time soon.
Re: (Score:2)
I don't see how anyone can claim "IPMI is extremely limited" with a straight face. It does nearly everything you could want in an OoBM interface, except (usually) a GUI. You can do lights-out management, powering systems off and on, setting BIOS/UEFI options like boot device statelessly (not just at boot-up), it can be configured to have a dedicated NIC port, or shared with the OS whether you're bonding NICs or not, gives you a serial console (including BIOS access
Re: (Score:2)
With modern IPMI you can do more than that too, such as booting to an ISO image from all the way across the internet. You can even do full GUI and everything with a simple VNC client. Just so long as the machine powers on and has an internet connection configured, it'll work.
Intel's AMT boards do all of this anyways, and they're quite common these days.
Re: (Score:2)
The majority are ones people build themselves out of cheap consumer parts.
Re: (Score:3, Insightful)
So that's three-strikes... You're 1) using a regular PC as a server (no IPMI), 2) that PC doesn't even have a serial port to be used as an OoBM console, and finally 3) you've got some issue with the video card not even displaying text-mode. With all three strikes against your server, I just can't muster any sympathy for the predicament you put yourself in, relying on an unsuitable cheap piece of crap equipment.
In fact it's
Re: (Score:3)
Re: (Score:2)
But a serial console certainly did help, right?
Re: (Score:3)
Wow, with all the hostile responses your post has been getting, I almost started thinking that I had joined the LKML by mistake.
Re: (Score:3)
When I tested wireless clients at Cisco, I installed the GUI with Fedora or Mint because I needed to run YouTube video in a loop. The division chief wanted to fire me for using 75% of the wireless bandwidth for YouTube. He didn't realize that I had 30 laptops running YouTube video and supporting 300 users without a hiccup in network performance. All the YouTube videos were from the Cisco channel, which included several interviews with him. Nothing like seeing your face on 30 screens.
Re: (Score:3)
What was the Nvidia video driver doing on a server?
Re: (Score:2)
What was any kind of an X server doing on a server?
Re: (Score:2)
Ubuntu saw the built-in Nvidia video card on the desktop motherboard I was using at the time and installed the Nvidia drivers. Initial setup was fine. The automatic upgrades usually screwed things up.
FreeNAS has the VGA-only driver for video output and works fine with the built-in AMD video card on the desktop motherboard that I'm currently using.
Re: (Score:2)
Ubuntu does install the shitty open source driver for nvidia, nouveau, if it detects an nvidia card
Re: (Score:2)
Some of us do use CUDA or OpenCL in our servers. Not that Nouveau is much use for that, but it is the default and you gotta boot Ubuntu up with the defaults at least once before you can configure it properly.
Re: ZFS is nice... (Score:2)
ZFS is not really the supported setup for Ubuntu. I've only had issues with the proprietary nvidia driver. I've always been able to fix those issues.
When ZFS and nouveau are supported by default then that configuration will be tested and ideally more robust. I wouldn't worry.
Re:ZFS is nice... (Score:4, Interesting)
I run ZFS on any / every machine I can, server or not. That is one filesystem where the features outweigh all possible concerns.
Re: (Score:2)
Re: (Score:2, Informative)
To name a few: A variety of flavors of built-in RAID / replication. Built in error detection and correction. Snapshots. The ability to send and receive deltas between snapshots from one server to another.
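That delta replication is a single pipeline; a rough sketch with invented hostnames and dataset names:

# ship only the blocks that changed between yesterday's and today's snapshots
zfs snapshot tank/data@today
zfs send -i tank/data@yesterday tank/data@today | ssh backuphost zfs receive backup/data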
Re:ZFS is nice... (Score:4, Insightful)
No. RAID isn't better handled at other layers. If you don't know about the filesystem semantics then you need NVRAM or journalling at the block level to avoid the RAID-5 write hole. RAID-Z doesn't have this problem. If you're recovering a failed block-level RAID, then you need to copy all of the data, including unused space. With ZFS RAID (all levels), you only copy the used data. There are numerous other advantages to rearranging the layers, including being a lot more flexible in the provisioning.
It's also a mistake to think of ZFS as a layer. ZFS has three layers: the lowest handles physical disks and presents a linear address space, the middle presents a transactional object store, and the top presents something that looks like a filesystem (or a block device, which is useful for things like VM disk images).
Re:ZFS is nice... (Score:5, Funny)
Re: (Score:3)
One GREAT advantage it has over your bog-standards filesystems like NTFS and ext4 is its copy-on-write architecture, and the essentially free and near-instant snapshot system it provides.
When you take a snapshot of a filesystem, it simply makes a copy of the superblock. All of the space on the devices remains marked as in-use, and both snapshots share exactly the same physical storage.
When you make a change to one of the snapshots, it simply writes the changed blocks to a different location on the underlying
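In day-to-day use that means snapshots are effectively free to take and to return to; for example (dataset name invented):

# instantaneous, and takes no extra space until the live filesystem diverges
zfs snapshot tank/home@before-upgrade
# rolling back is just as quick
zfs rollback tank/home@before-upgrade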
Re: (Score:3)
Re: (Score:2)
FreeNAS only needs 8GB of RAM because of how the OS is tuned, not because of ZFS. FreeNAS runs entirely in RAM.
The FreeBSD manual and the Solaris manual both state 1GB of system RAM as enough to use ZFS. I'm running a 40TB ZFS pool with 16GB of RAM and performance is excellent.
Re: (Score:3)
As someone who has 50TB on a system with 16GB of RAM, I agree with you.
I wish people would stop spreading this "1GB RAM per TB" FUD.
It is the recommendation for DEDUP, not for standard ZFS.
Re: (Score:2, Interesting)
So how are they doing this without license conflict? Are they doing a clean-room implementation of ZFS?
Re: (Score:3)
This is what I wonder as well.
What's frustrating is that it's not the ZFS license that's the problem. It's the GPL. Oracle couldn't give a flying fuck if someone put ZFS into the Linux kernel, but the GPL zealots would probably raise a huge stink about it and keep it from happening.
For the record, I support open source; I just don't like the "viral" nature of the GPL. The ZFS situation is a case where it's doing more harm than good.
Re: (Score:3)
Sorry but that's simply not true. It was Sun and now Oracle that purposely chose an incompatible license for ZFS. Nothing to do with the GPL here. Your complaints are like the people that buy up land around an airport, build houses, and then complain about the noise.
Anyway, if you read the fine articles you'd discover that what Ubuntu is going to do is include ZoL modules in their kernel packages. This takes advantage of GPLv2's aggregation clause, which lets you ship non-GPL binaries with GPL'd binaries
Re: (Score:2)
>although ZoL is not that hard to get running at all.
It's easy to get running but hard to KEEP running, because DKMS has a bad habit of breaking sometimes when updating the kernel or ZFS itself.
I'd say about a 50/50 chance of having the system come up correctly after a "yum update" for the kernel or ZFS on RHEL 6.
Being able to just install binary modules would probably help considerably, provided they are built correctly by the distro maintainers.
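A quick sanity check after any kernel or ZFS update, assuming the modules are managed as the usual zfs/spl DKMS packages (names can differ per distro), would be roughly:

# confirm the modules were actually rebuilt for the new kernel
dkms status
# rebuild anything that is missing for the installed kernels
dkms autoinstall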
Re: (Score:2)
DKMS is not the only way to install ZoL though. It can be built and installed perfectly fine without DKMS, and I do this for some of my machines.
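Building straight from a release tarball is the usual autotools dance (just a sketch; the exact configure options depend on the release and distro):

# build the kernel modules and userland tools against the running kernel
./configure
make
make install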
That being said I have been using DKMS on a Debian box for 3 years now and have gone through many, many ZoL upgrades and many kernel upgrades and have never had any issue with the upgrading not going smoothly. Sounds more like a problem with the ZoL maintainer for RHEL.
Re: (Score:2)
Because there isn't a conflict if done right.
People that claim there is a conflict generally don't understand how the licenses actually work and what they allow and don't allow.
There is no legal issue preventing the sources from being combined, because neither the CDDL nor the GPL place restrictions on aggregations of source code, which is what putting ZFS into the same tree as Linux would be. Binary modules built from such a tree could be distributed with the kernel's GPL modules under what the GPL consid
Re: (Score:2)
Aggregate means two programs that are not combined and just live on the same filesystem. In the case of a filesystem driver, it's read into the kernel space and touches unexported APIs of the kernel and various kernel internals.
It is thus a derivative work.
Re: (Score:2)
Server with automatic upgrades and video drivers?
No backups, either? You had to reinstall the OS?
Re: (Score:2)
When I was using Ubuntu for file server, I was using software RAID and ReiserFS.
https://en.wikipedia.org/wiki/ReiserFS [wikipedia.org]
Re: (Score:2)
Re: (Score:2)
Maybe. Red Hat has a very different legal outlook on things than Canonical does.
Re: (Score:2)
Android finally gets EXT4 support in Marshmallow to provide real and wonderful support for SD cards, and suddenly Ubuntu goes ZFS. There may be many advantages with ZFS. Matching that of the world's largest OS doesn't hurt.
And when Android gets ZFS, we'll be ready for when those 256 zebibyte SD cards come out.
Re: (Score:2)
Actually there was a problem with zfs and systemd on ubuntu; namely, you couldn't have the ZFS stack automatically start and mount the filesystem at bootup. It just wasn't possible until very recently.
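These days the fix is just enabling the ZoL systemd units; if I remember the unit names shipped by recent packages correctly, that is roughly:

# import pools from the cache file and mount datasets at boot
systemctl enable zfs-import-cache.service zfs-mount.service zfs.target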
Re: (Score:2)
How's this for an appliance:
- Have a server with an ssd and 4 disks
- Install VMware ESXi bare metal
- Create a filer VM that you install ubuntu with zfs onto, and use VT-d to pass the disks directly to that VM
- Have that VM share the ZFS volume as an NFS share that is only open to one IP (a sketch of that export follows below)
- Create another VM that mounts that NFS share and subsequently offers these services to the rest of the network: Samba, Plex, Couchpotato, Sickrage, Rutorrent/rtorrent
At least, this is what my server looks like anyways. Tot
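The ZFS side of that single-client NFS export is one property; dataset name and IP are invented, and I believe the Solaris-style rw=@host syntax also works on ZFS on Linux:

# export the dataset read-write to the consumer VM only
zfs set sharenfs="rw=@192.168.10.5" tank/media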
Re: (Score:2)
Pretty impressive system you got there?? Care to share the cost to feed that setup on a daily basis??
Hmm...25 cents I think? It's a Lenovo TS440 that I bought on Amazon.
Re: (Score:2)
Oh no there are no datastores on it. I did try that route, using a software iSCSI scheme, and it doesn't work terribly well because it can't recover on its own if you reboot it.
Instead the VM itself just bootstraps off of the datastore that hosts the ESXi install. It would be no loss if it were to fail, as unlike most people who build these things, I have a complete build doc ready to go so that I can have a fresh instance of it back up and running in 30 minutes. Since the bulk of the data is stored on the Z