Linus: Don't Use ZFS

6 years ago (realworldtech.com)

Here's his reasoning:

"honestly, there is no way I can merge any of the ZFS efforts until I get an official letter from Oracle that is signed by their main legal counsel or preferably by Larry Ellison himself that says that yes, it's ok to do so and treat the end result as GPL'd.

Other people think it can be ok to merge ZFS code into the kernel and that the module interface makes it ok, and that's their decision. But considering Oracle's litigious nature, and the questions over licensing, there's no way I can feel safe in ever doing so.

And I'm not at all interested in some "ZFS shim layer" thing either that some people seem to think would isolate the two projects. That adds no value to our side, and given Oracle's interface copyright suits (see Java), I don't think it's any real licensing win either."

  • Btrfs crashed for me on two occasions. After the last crash, around 2 years back, I installed ZFS (which I have been using for ~10 years on a FreeBSD server), and it has worked like a charm since then.

    I understand Linus's reasoning, but there is just no way I will install btrfs, like, ever. I would rather not update the kernel (I have ZFS on my Fedora root with regular kernel updates and scripts that verify everything is in order with the kernel modules prior to reboot) than use a file system that crashed twice in two years.

    Yes, it is very annoying if an update breaks the fs, but currently:

    - in 2 years, btrfs crashed on its own twice

    - in the next 2 years, an update never broke ZFS

    As far as I am concerned, the case for zfs is clear.

    This might be helpful to someone: https://www.csparks.com/BootFedoraZFS/index.md

    Anyway, Linus is going too far with his GPL agenda. The MODULE_LICENSE gatekeeping for kernel modules explains why hardware is less supported on Linux - instead of devs focusing on getting more support from 3rd-party companies, they try to force them to go GPL. Once you set MODULE_LICENSE to something non-GPL, you quickly figure out that you can't use most kernel calls. Not the code. The calls.
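
    For anyone curious, the mechanism is roughly this (a sketch; ZFS is just an example module here, but MODULE_LICENSE, EXPORT_SYMBOL_GPL, and modinfo are the real kernel interfaces):

      # In the module's C source, the license is declared with a macro:
      #   MODULE_LICENSE("GPL");          /* full symbol access */
      #   MODULE_LICENSE("Proprietary");  /* GPL-only exports become off-limits */
      # Symbols exported with EXPORT_SYMBOL_GPL() can only be resolved by
      # modules whose declared license is GPL-compatible; a non-GPL module
      # referencing one fails at load time with an "Unknown symbol" error.
      # A module's declared license is visible without loading it:
      modinfo zfs | grep -i '^license'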

    • The Linux kernel has been released under GPL2 license since day 1, and I don't think that's ever going to change. Linus is more pragmatic than many of his detractors think - he thankfully refused to migrate to GPL3 because the stricter clauses would have scared away a lot of for-profit users and contributors.

      Relaxing to anything more permissive than GPL2 would instead mean the end of Linux as we know it. A more permissive license means that nothing would prevent Google or Microsoft from releasing their own closed-source Linux, or replacing the source code of most of the modules with hex blobs.

      I believe that GPL2 is a good trade-off for a project like Linux, and it's good that we don't compromise on anything less than that.

      Even though I agree on the superiority of ZFS for many applications, I think that the blame for the missed inclusion in the kernel is on Oracle's side. The lesson learned from NTFS should be that if a filesystem is good and people want to use it, then you should make sure that the drivers for that filesystem are as widely available as possible. If you don't do it, then someone sooner or later will reverse engineer the filesystem anyway. The success of a filesystem is measured by the number of servers that use it, not by the amount of money that you can make out of it. For once Oracle should act more like a tech company and less like a legal firm specialised in patent exploitation.

      2 replies →

    • I agree with the stance on btrfs. Around the same time (2 years back), it crashed on me while I was trying to use it for an external hard disk attached to a Raspberry Pi. Nothing fancy. Since then, I can't tolerate fs crashes; for a user, the filesystem is supposed to be one of the most reliable layers.

      1 reply →

    • Btrfs, like OCFS, is pretty much junk. You can do everything you need to on local disk with XFS, and if you need clever features, buy a NetApp.

    • Both ZFS and BTRFS are essentially Oracle now. BTRFS was largely an effort by Oracle to copy Sun's ZFS advantages in a crappy way, which became moot once they acquired Sun. ZFS also requires (a lot of) ECC memory for reliable operation. It's great tech; pity it's dying a slow death.

      6 replies →

  • That's his reasoning for not merging ZFS code, not for generally avoiding ZFS.

    • Here are his reasons for generally avoiding ZFS from what I consider most important to least.

      - The kernel team may break it at any time, and won't care if they do.

      - It doesn't seem to be well-maintained.

      - Performance is not that great compared to the alternatives.

      - Using it opens you up to the threat of lawsuits from Oracle. Given history, this is a real threat. (This is one that should be high for Linus but not for me - there is no conceivable reason that Oracle would want to threaten me with a lawsuit.)

      154 replies →

    • The problem with ZFS is that it isn't part of the Linux kernel.

      The Linux project maintains compatibility with userspace software, but it does not maintain compatibility with 3rd-party modules, and for a good reason.

      Since modules have access to any internal kernel API, it is not possible to change anything within the kernel without considering 3rd-party code, if you want to keep that code working.

      For this reason the decision was made that if you want your module to keep working, you need to make it part of the Linux kernel; then, if anybody refactors anything, they need to consider the modules affected by the change.

      Not allowing your module to be part of the kernel is a disservice to your user base. While there are modules like that which are maintained moderately successfully (Nvidia, VMware, etc.), this is all at the cost of the users and the userspace maintainers who have to deal with it.

      11 replies →

    • And he was doing fine up to that point. For IMO good reasons, ZFS will likely never be merged into Linux. And filesystem kernel modules from third parties have a pretty long history of breakage issues going back to some older Unixes.

      That's going to be plenty of reason not to use ZFS for most people. The licensing by itself is also certainly a showstopper for many.

      But I'm not sure his other comments are really fair and, had Oracle relicensed ZFS n years back, ZFS would almost certainly be shipping with Linux, whether or not as the typical default I can't say. It certainly wasn't just a buzzword and there were a number of interesting aspects to its approach.

    • Well, he says

      > It was always more of a buzzword than anything else, I feel, and the licensing issues just make it a non-starter for me.

      So presumably the licensing problem mentioned by your parent's comment is weighing heavily here. I think this "don't use ZFS" statement is most accurately targeted at distro maintainers. Anyone not actually redistributing Linux and ZFS in a way that would (maybe) violate the GPL is not at any risk. That means even large enterprises can get away with using ZoL.

    • It's exactly that, when combined with the longstanding practice of maintaining compatibility with userspace, but reserving the right to refactor kernel-space code whenever and wherever needed. If ZFS-on-linux breaks in a subtle or obvious way due to a change in linux, he can't afford to care about that - keeping the linux kernel codebase sane while adding new features, supported hardware, optimizations, and fixes at an honestly scary rate, is not that easy.

      See also https://www.kernel.org/doc/html/latest/process/stable-api-no...

      (fuse is a stable user-space API if you want one ... it won't have the same performance and capabilities of course ...)

      2 replies →

    • "Don't use ZFS. It's that simple. It was always more of a buzzword than anything else, I feel, and the licensing issues just make it a non-starter for me."

      When he says that, I think of the $500 million Sun spent on advertising Java.

      4 replies →

    • Well he had this:

      > as far as I can tell, it has no real maintenance behind it either any more

      Which simply isn't true. They just released a new ZFS version with encryption built in (no more ZFS + LUKS) and they removed the SPL dependency (which didn't support Linux 5.0+ anyway).

      I use ZFS on my Linux machines for my storage and I've been rather happy with it.
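
      For reference, the native encryption workflow added in 0.8 looks roughly like this (a sketch; pool and dataset names are illustrative):

        # Create an encrypted dataset (OpenZFS 0.8+):
        zfs create -o encryption=on -o keyformat=passphrase tank/secret
        # After a reboot or re-import, load the key before mounting:
        zfs load-key tank/secret
        zfs mount tank/secret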

      11 replies →

    • Relevant bits:

      "Don't use ZFS. It's that simple. It was always more of a buzzword than anything else, I feel, and the licensing issues just make it a non-starter for me.

      The benchmarks I've seen do not make ZFS look all that great. And as far as I can tell, it has no real maintenance behind it either any more, so from a long-term stability standpoint, why would you ever want to use it in the first place?"

      41 replies →

  • > And I'm not at all interested in some "ZFS shim layer" thing either

    If there is no "approved" method for creating Linux drivers under licenses other than the GPL, that seems like a major problem that Linux should be working to address.

    Expecting all Linux drivers to be GPL-licensed is unrealistic and just leads to crappy user experiences. nVidia is never going to release full-featured GPL'd drivers, and even cooperative vendors sometimes have NDAs which preclude releasing open source drivers.

    Linux is able to run proprietary userspace software. Even most open source zealots agree that this is necessary. Why are all drivers expected to use the GPL?

    ---

    P.S. Never mind the fact that ZFS is open source, just not GPL compatible.

    P.P.S. There's a lot of technical underpinnings here that I'll readily admit I don't understand. If I speak out of ignorance, please feel free to correct me.

    • I am also not an expert in this space - but if I understand correctly, the reason the Linux Nvidia driver sucks so much is that it is not GPL'd (or open source at all).

      There is little incentive for Nvidia to maintain a Linux-specific driver, and because it is closed source, the community cannot improve/fix it.

      > Why are all drivers expected to use the GPL?

      I think the answer to this is: drivers are expected to use the GPL if they want to be mainlined and maintained - as Linus said, other than that you are "on your own".

      24 replies →

    • > Expecting all Linux drivers to be GPL-licensed is unrealistic and just leads to crappy user experiences. nVidia is never going to release full-featured GPL'd drivers, and even corporative vendors sometimes have NDAs which preclude releasing open source drivers.

      Nvidia is pretty much the only remaining holdout here on the hardware driver front. I don't see why they should get special treatment when the 100%-GPL model works for everyone else.

    • ZFS is not really GPL-incompatible either, but it doesn't matter. Between FUD and Oracle's litigiousness, the end result is that there is no way to overcome the impression that it is GPL-incompatible.

      But it is a problem that you can't reliably have out-of-tree modules.

      Also, Linus is wrong: there's no reason the ZoL project can't keep the ZFS module in working order, with some lag relative to updates to the Linux mainline. So as long as you stay on supported kernels and the ZoL project remains alive, of course you can use ZFS. And you should use ZFS, because it's awesome.

      4 replies →

    • There's a unique variable here and that's Oracle.

      That shouldn't actually matter; it should just depend on the license. But millions in legal fees says otherwise.

      1 reply →

    • >If there is no "approved" method for creating Linux drivers under licenses other than the GPL, that seems like a major problem that Linux should be working to address.

      As a Linux user and an ex android user, I absolutely disagree and would add that the GPL requirement for drivers is probably the biggest feature Linux has!

      1 reply →

    • There is a big difference between a company distributing a proprietary Linux driver and the Linux project merging software under a GPL-incompatible license. In the first case, it is the Linux developers who can raise the issue of copyright infringement, and the company that has to defend its right to distribute. In the latter, the roles are reversed: the Linux developers have to argue that they are in compliance with the copyright license.

      A shim layer is a poor legal bet. It assumes that a judge who might not have much technical knowledge will agree that by putting this little technical trickery between the two incompatible works then somehow that turn it from being a single combined work into two cleanly separated works. It could work, but it could also very easily be seen as meaningless obfuscation.

      > Why are all drivers expected to use the GPL

      Because a driver is tightly dependent on the kernel. It is this relationship that distinguishes two works from a single work. An easy way to see this is how a music video works. If I create a file with a video part and an audio part, and distribute it, legally this will be seen as me distributing a single work. I also need additional copyright permission in order to create such a derivative work, rights that go beyond just distributing the different parts. If I were to argue in court that I am just distributing two different works, then the relationship between the video and the music would be put into question.

      Userspace software is generally seen as independent work. One reason is that such software can run on multiple platforms, but the primary reason is that people simply don't see it as an extension of the kernel.

    • There is an "approved" method - write and publish your own kernel module. However, if your module is not GPL-licensed, it cannot be published in the Linux kernel itself, and you must keep up with the maintenance of the code. This is a relatively fair requirement imo.

      2 replies →

    • > If there is no "approved" method for creating Linux drivers under licenses other than the GPL, that seems like a major problem that Linux should be working to address.

      It's a feature, not a bug. Linux is intentionally hostile to binary-blob drivers. Torvalds described his decision to go with the GPLv2 licence as "the best thing I ever did". [0]

      This licensing decision sets Linux apart from BSD, and is probably the reason Linux has taken over the world. It's not that Linux is technically superior to FreeBSD or OpenSolaris.

      > Expecting all Linux drivers to be GPL-licensed is unrealistic and just leads to crappy user experiences

      'Unrealistic'? Again, Linux took over the world!

      As for nVidia's proprietary graphics drivers, they're an unusual case. To quote Linus: I personally believe that some modules may be considered to not be derived works simply because they weren't designed for Linux and don't depend on any special Linux behaviour [1]

      > Why are all drivers expected to use the GPL?

      Because of the 'derived works' concept.

      The GPL wasn't intended to overreach to the point that a GPL web server would require that only GPL-compatible web browsers could connect to it, but it was intended to block the creation of a non-free fork of a GPL codebase. There are edge-cases, as there are with everything, such as the nVidia driver situation I mentioned above.

      [0] https://en.wikipedia.org/w/index.php?title=History_of_Linux&...

      [1] https://en.wikipedia.org/w/index.php?title=Linux_kernel&oldi...

    • > If there is no "approved" method for creating Linux drivers under licenses other than the GPL, that seems like a major problem that Linux should be working to address.

      The problem is already addressed: if someone wants to contribute code to the project, then its licensing must be compatible with the prior work contributed to the project. That's it.

      19 replies →

    • > If there is no "approved" method for creating Linux drivers under licenses other than the GPL, that seems like a major problem that Linux should be working to address.

      It's less a thing Linux can work on than a thing lawmakers/courts would have to make binding decisions on, which would make it clear whether this usage is OK or not. But in practice this can only be decided on a case-by-case basis.

      The only way Linux could work on this is by:

      1. Adding an exception to their GPL license to exclude kernel modules from GPL constraints (which obviously won't happen, for a bunch of reasons).

      2. Turning Linux into a microkernel with user-land drivers, and interfaces for those drivers which are not license-encumbered (which again won't happen, because this would be a completely different system).

      3. Oracle re-licensing ZFS under a permissive open source license (e.g. dual-licensing it; it doesn't need to be GPL, just GPL-compatible, e.g. Apache v2). Guess what, that won't happen either, or at least I would be very surprised. I mean, Oracle is running out of products people _want_ to buy from them and is increasingly moving into an area where they (ab)use the license/copyright/patent system to earn their money and force people to buy their products (or at least somehow pay license fees to them).

    • >[...] that seems like a major problem that Linux should be working to address [...] Why are all drivers expected to use the GPL?

      Vendors are expected to merge their drivers in mainline because that is the path to getting a well-supported and well-tested driver. Drivers that get merged are expected to use a GPL2-compatible license because that is the license of the Linux kernel. If you're wondering why the kernel community does not care about supporting an API for use in closed-source drivers, it's because it's fundamentally incompatible with the way kernel development actually works, and the resulting experience is even more crappy anyway. Variations of this question get asked so often that there are multiple pages of documentation about it [0] [1].

      The tl;dr is that closed-source drivers get pinned to the kernel version they're built for and lag behind. When the vendor decides to stop supporting the hardware, the drivers stop being built for new kernel versions and you can basically never upgrade your kernel after that. In practice it means you are forced to use that vendor's distro if you want things to work properly.

      >[...] nVidia is never going to release full-featured GPL'd drivers.

      All that says to me is that if you want your hardware to be future-proof, never buy nvidia. All the other Linux vendors have figured out that it's nonsensical to sell someone a piece of hardware that can't be operated without secret bits of code. If you ever wondered why Linus was flipping nvidia the bird in that video that was going around a few years ago... well now you know.

      [0]: https://www.kernel.org/doc/html/latest/process/kernel-driver...

      [1]: https://www.kernel.org/doc/html/latest/process/stable-api-no...

    • > Linux is able to run proprietary userspace software. Even most open source zealots agree that this is necessary. Why are all drivers expected to use the GPL?

      To answer your excellent question (and ignore the somewhat unfortunate slam on people who seem to differ with your way of thinking): it is an intentional goal of software freedom. The idea of a free software license is to allow people to obtain a license to the software if they agree not to distribute changes to that software in such a way that downstream users have fewer options than they would with the original software.

      Some people are at odds with the options available with licenses like the GPL. Some think they are too restrictive. Some think they are too permissive. Some think they are just right. With respect to your question, it's neither here nor there whether the GPL is hitting a sweet spot or not. What's important is that the original author decided that it did and chose the license. I don't imagine that you intend to argue that a person should not be able to choose the license that is best for them, so I'll just leave it at that.

      The root of the question is "What determines a change to the software". Is it if we modify the original code? What if we add code? What if we add a completely new file to the code? What if we add a completely new library and simply link it to the code? What if we interact with a module system at runtime and link to the code that way?

      The answers to these questions are not well defined. Some of them have been tested in court, while others have not. There are many opinions on which of these constitutes changing of the original software. These opinions vary wildly, but we won't get a definitive answer until the issues are brought up in court.

      Before that time period, as a third party who wishes to interact with the software, you have a few choices. You can simply take your chances and do whatever you want. You might be sued by someone who has standing to sue. You might win the case even if you are sued. It's a risk. In some cases the risk is higher than others (probably roughly ordered in the way I ordered the questions).

      Another possibility is that you can follow the intent of the original author. You can ask them, "How do you define changing of the software". You may agree with their ideas or not, but it is a completely valid course of action to choose to follow their intent regardless of your opinion.

      Your question is: why are all drivers expected to use the GPL? The answer is because drivers are considered by the author to be an extension of the software and hence to be covered by the same license. You are absolutely free to disagree, but it will not change the original author's opinion. You are also able to decide not to abide by the author's opinion. This may open you up to the risk of being sued. Or it may not.

      Now, the question unasked is probably the more interesting question. Why does Linus want the drivers to be considered an extension of the original software? I think the answer is that he sees more advantages in the way people interact in that system than disadvantages. There are certainly disadvantages and things that we currently can't use, but for many people this is not a massive hardship. I think the question you might want to put to him is, what advantages have you realised over the years from maintaining the license boundaries as they are? I don't actually know the answer to this question, but would be very interested to hear Linus's opinion.

      3 replies →

  • This is nonsense. The problem is not getting ZFS bundled with Linux, as he implies here. The problem is that Linux artificially restricts what APIs your module is able to access based on its license, so you wouldn't be able to use ZFS even on your own initiative, as he suggests you can.

    He is claiming that it comes down to the user's choice, which would be just fine if that were true. The only problem here is that Linux has purposely taken steps to hinder that choice.

I'll give up Linux on my servers before I give up ZFS

especially so given the recent petulant attitude that broke API compatibility in the LTS branch just to spite the ZFS developers: https://news.ycombinator.com/item?id=20186458

compete honestly on technical merit, rather than pulling dirty tricks that you'd expect of Oracle or 1990's MS

  • Pretty much my view as well. If Linux becomes incompatible with ZFS in any way, I'll switch to FreeBSD.

    That said, after the Oracle Java debacle, I can see why Linus would not be receptive towards merging ZFS into the kernel. I just wish he argued the point on legal issues alone instead of making up stories about non-existent technical flaws in ZFS. The whole thing is basically a work of art. Oracle should consider GPL-ing it and integrating it into Linux directly.

    • > Oracle should consider GPL-ing it and integrating it into Linux directly

      I think Apache (or BSD/MIT) would be far more palatable, as GPL'ing it would cut off the BSDs as well as OpenSolaris, which would certainly be a bummer.

      1 reply →

  • Yeah... I use FreeBSD for file servers because I don't even have to pay attention to this constant ZFSonLinux drama. I treat them almost like appliances. Linux servers are more than happy to use them on the back-end.

  • The core development team hasn't really been fully trustworthy since they spent years pretending their CPU scheduler wasn't hot garbage for desktop usage, denied the need for a pluggable framework that would allow multiple schedulers to be selected from, and then, seemingly an age later, implemented something in the same vein as CK's while giving zero credit.

As a heavy user of ZFS and Linux, what else is there that even comes close to what ZFS offers?

I want cheap and reliable snapshots, export & import of file systems like ZFS datasets, simple compression, caching facilities (like SLOG and ARC), and decent performance.
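
For concreteness, here is roughly what that wish list looks like in ZFS terms (a sketch; pool, dataset, host, and device names are all illustrative):

  zfs snapshot tank/data@nightly         # cheap, atomic snapshot
  zfs send tank/data@nightly | ssh backup zfs recv vault/data   # export/import
  zfs set compression=lz4 tank/data      # transparent compression
  zpool add tank log /dev/nvme0n1        # SLOG: sync writes hit the fast device
  zpool add tank cache /dev/nvme1n1      # L2ARC: read cache beyond RAM (ARC)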

  • Bcachefs is probably the only thing that will get there. The codebase is clean and well maintained, built from solid technology (bcache), and will include most of the ZFS niceties. I just wish more companies would sponsor the project and stop wasting money on BTRFS.

    • Yes, I’m eagerly waiting for Bcachefs to get there at some point, but it is several years away (rightly so, because it is hard and the developer is doing an amazing job) if my understanding of its state is correct.

      I have heard of durability issues with btrfs, and do not want to touch it if it fails at its primary job.

    • Which is why ZFS is still a thing today - there are no other alternatives. Everything is coming "soon" while ZFS is actually here and clocking up a stability track-record.

    • >Bcachefs is probably the only thing that will get there.

      Or Bcachefs is probably the only thing that might get there.

      The amount of engineering hours that went into ZFS is insane. It is easy to get a project that has 80% similarity on the surface, but then you spend as much time on the last 20% and the edge cases as you did getting from 0 to 80%. ZFS has been battle-tested by many. Rsync.net is on ZFS.

      The number of petabytes stored safely on ZFS over the years gives peace of mind.

      Speaking of rsync.net, a ZFS topic on HN will normally have him resurface. Haven't seen any reply from him yet.

    • I'm looking forward to bcachefs becoming feature-complete and upstreamed. We finally have a good chance of having a modern and reliable FS in the Linux kernel. My wish list includes snapshots and per-volume encryption.

    • What if the main purpose of BTRFS is to have something "good enough" so no one starts working on a project that can compete with large commercial storage offerings?

      Does anyone remember the parity patches they rejected in 2014?

      > Your work is very very good, it just doesn’t fit our business case.

      I haven't followed it much. Does it have anything more than mirroring (that's stable) these days?

    • >stop wasting money on BTRFS

      You're saying they should stop supporting a project that was considered stable by the time the other started being developed. Why do that? What makes Bcachefs a better choice?

      12 replies →

  • I know this isn't an option for everyone, but this is part of why I run FreeBSD instead of Linux for servers where I need ZFS.

  • I agree that ZFS has a lot to offer. But the legal difficulties in merging ZFS support into the mainline kernal are understandable. It's a shame but I think he is making the right call.

    • Merging into the mainline kernel is not what the person he is replying to was even asking for. All they were asking for is for Linux to stop putting APIs behind DRM that prevents non-GPL modules like ZFS from using them. That doesn't mean ZFS must be bundled with Linux.

      I think everyone is in agreement that ZFS can't be included in the mainline kernel. The question is just if users should be able to install and use it themselves or not.

      1 reply →

    • Kernal? If you can merge zfs support into 8KB kernal then you are not a mere mortal, so no need to worry about any legal difficulties.

  • XFS on an LVM thin pool LV should give you a very robust fs, cheap CoW snapshots, and multi-device support. If you want, you can put the thin pool on RAID via LVM RAID under the thin pool.

    For import/export, IIRC XFS has support for it, and you can dump/import LV snapshots to get atomicity.

    For caching there is LVM cache, which should again be possible to combine with the thin pool & RAID. Or you can use it separately for a normal LV.

    All this is functionality tested by years of production use.

    For compression/deduplication, that is AFAIK a work in progress upstream, based on the open-sourced VDO code.
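
    A rough sketch of that stack (VG/LV names and sizes are illustrative):

      lvcreate --type thin-pool -L 100G -n pool0 vg0   # thin pool
      lvcreate --thin -V 200G -n data vg0/pool0        # thin LV (can overprovision)
      mkfs.xfs /dev/vg0/data
      lvcreate --snapshot --name data_snap vg0/data    # cheap CoW snapshot
      lvchange -ay -K vg0/data_snap                    # activate it (thin snapshots skip auto-activation)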

    • Interesting combination of tools; I have used them independently but never as a replacement for my beloved ZFS.

      Never made snapshots with LVM. I've always used LVM as a way to carve up logical storage from a pool of physical devices, but nothing more. I need to RTFM on how snapshotting would work there - could I restore just a few files from an hour ago while letting everything else stay as it is?

      With ZFS, I use RAM as a read cache (ARC) and an Optane disk as a sync write cache (SLOG). I wonder if LVM cache would let me do such a thing. Again, a pointer for more manual reading for me.

      Compression is a nice to have for me at this moment. Good to know that it is being worked on at the LVM layer.

      1 reply →

    • Call me when somebody like a major cloud provider has used this system to drive millions of hard drives. I'm not gonna patch my data security together like that.

      There is a difference between 'all these tools have been used in production' and 'this is an integrated tool that has been used for 15+ years in the biggest storage installations in the world'.

      1 reply →

  • Honestly asking: how does Btrfs compare to ZFS?

    There's also Lustre but it's a different beast altogether for a different scenario.

    • On the surface, btrfs is pretty close to zfs.

      Once you actually use them, you discover all the ways that btrfs is a pain and zfs is a (minor) joy:

      - snapshot management

      - online scrub

      - data integrity

      - disk management

      I lost data from perfectly healthy-appearing btrfs systems twice. I've never lost data on maintained zfs systems, and I now trust a lot more data to zfs than I ever have to btrfs.

      11 replies →

    • Btrfs has eaten my data, and once that happens I will never, ever, ever, literally ever go back to that system. It's unacceptable to me that a system eats data, especially after multiple rounds of 'it's stable now'.

      But in the end it always turns out that it is not going to eat your data only if you 'use' it correctly.

      I used ZFS for far longer and had far fewer issues.

  • Stratis and VDO have a lot of promise, although it's still a little early. The approach that Stratis has taken is refreshing. It's very simple and reuses lots of already existing stuff so by the time it's released it will already be mature (since the underlying code has been running for many years).

    Once a little more guidance comes out about how to properly use VDO and Stratis together, I'll move my personal stuff to it.

  • So besides the obvious btrfs answer, what about ceph as clustered storage with very fast connectivity?

    There is also BeeGFS, I haven't used it but /r/datahoarders sometimes touts it.

    Not for Linux, but I have been keeping an eye on Matt Dillon's DragonFly BSD, where he has been working on HAMMER2, which is very interesting.

    I don't know much but bcachefs has been making more waves lately also.

    I think the bottom line is that people need to have good backup in place regardless.

  • Does btrfs meet your requirements?

    • I've tried btrfs without much luck.

      btrfs still has a write hole for RAID5/6 (the kind I primarily use) [0] and has since at least 2012.

      For a filesystem to have a bug leading to dataloss unpatched for over 8 years is just plain unacceptable.

      I've also had issues even without RAID, particularly after power outages. Not minor issues but "your filesystem is gone now, sorry" issues.

      [0]: https://btrfs.wiki.kernel.org/index.php/RAID56

      8 replies →

    • btrfs is not at all reliable, so if you care about your files staying working files, it probably doesn't meet your requirements. It is like the MongoDB 0.1 of filesystems.

      4 replies →

  • Hardware RAID controllers can do most if not all of these things.

    • I've lost more data in hardware RAID than in ZFS but I have lost data in both.

      Hardware RAID has very poor longevity. Vendor support and battery-backup replacement collide badly with BIOS and host management.

      Disclaimer: I work on Dell rackmounts, which means that rather than native SAS I am on 'Dell's hack on SAS', which is a problem, and I know it's possible to 'downgrade' back to native.

      1 reply →

    • Pay more for less safety and put all your data into the hands of the guy who wrote the firmware for that thing. I'm sure that software is well maintained open source code.

"Don't use ZFS. It's that simple. It was always more of a buzzword than anything else, I feel, and the licensing issues just make it a non-starter for me." - Linus

I have a strong feeling Linus has never actually used ZFS.

  • Probably not, given that statement. There's a reason why nearly everything I deal with today in large enterprise uses ZFS.

  • I also think he took a reasonable licence issue and conflated it with personal opinion not backed by experience. Nobody who has actually run ZFS says it's just buzzwords.

    • The license issue is not actually so clear. The actual license is a good one. Oracle itself is the bigger problem.

  • I once read this story about the problems of trying to support ZFS - was it in the Linux kernel, though? Can't remember. Sadly, I can't seem to dig it up right now, but the article, walking readers through the various clashes in constraints between the different systems and implementations, was bordering on the humorous.

He's not wrong. ext4 is actually maintained. This matters. ZFS hasn't kept up with SSDs. ZFS partitions are also almost impossible to resize, which is a huge deal in today's world of virtualized hardware.

Honestly Linus's attitude is refreshing. It's a sign that Linux hasn't yet become some stiff design-by-committee thing. One guy ranting still calls the shots. I love it. Protect this man at all costs.

  • > ZFS hasn't kept up with SSDs.

    Pretty sure this is false. ZFS does support trim (FreeBSD has had trim support for quite a while, but ZoL has it now as well), as well as supporting L2ARC and ZIL/SLOG on SSD.

    > ZFS partitions are also almost impossible to resize

    You can grow zfs partitions just fine (and even online expand). You just can't shrink them.
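
    For reference, TRIM on ZoL 0.8+ looks like this (pool name illustrative):

      zpool set autotrim=on tank   # continuous background TRIM
      zpool trim tank              # or a one-off manual TRIM
      zpool status -t tank         # per-device TRIM state/progress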

    • > You just can't shrink them.

      That's not even entirely true, though it requires shuffling around with multiple vdevs temporarily and doesn't presently support raidz. Also, vdev removal is primarily made to support an accidental "oops, I added a disk I shouldn't have" rather than removing a long-lived device -- there's no technical restriction against the latter case, though the redirect references could hamper performance.

      The official stance has always been to send/receive to significantly change a pool's geometry where it isn't possible online.

      1 reply →

    • Just last week I used btrfs shrink to upgrade to a newer Fedora after making a minimal backup. Very useful for my purposes... I don't plan to look at ZFS until it's in the mainline kernel. Having any Linux install media usable as a rescue disk is very handy.

  • > ZFS partitions are also almost impossible to resize

    I'm not sure you've actually used ZFS very much, because any way I read this, it is actually pretty straightforward and simple to resize partitions with ZFS pools, and volumes within ZFS pools.

    For example, if you mean that you have a root zpool on a device using only half the device, you just have to resize the partition and then turn on `autoexpand` for the pool.
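
    Something like this (a sketch; pool and device names are illustrative):

      zpool set autoexpand=on rpool
      # ...grow the underlying partition with your favourite tool, then:
      zpool online -e rpool sda3   # expand the vdev into the new space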

    • We are talking about something resembling adding an extra disk to RAID5. It can easily be done with mdadm RAID, and then you just need to resize LVM, or whatever you run on top of it. It cannot be done in ZFS, not in raid5/6 (raidz) mode.

      1 reply →

  • He is wrong. He's focused on performance; people use ZFS for its features, not its performance.

    • At work we used ZFS w/ snapshots for a container build machine, for performance reasons. We had some edge cases that made the Docker copy-on-write filesystem unsuitable.

  • zfsonlinux added support for TRIM last year. Are you referring to something else?

    • "last year" means it's in very few distributions at this time. Encryption is another feature that is technically supported, but just been added. When I built my NAS last year, I had to use dm-crypt because zfs didn't have it. Some features indeed lag pretty badly in zfs

      2 replies →

I don't blame Linus, but I use ZFS a lot.

I'll drop ZFS the moment I have an alternative with the same features (the first and last are sketched just after this list):

- disk management with simple commands that can create raids in any modern configuration

- zero cost snapshots

- import/export (zfs send/recv)

- COW and other data integrity niceties

- compression, encryption, dedup, checksums
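
A sketch of the first and last bullets (pool and device names are illustrative):

  zpool create tank raidz2 sdb sdc sdd sde   # double-parity raid, one command
  zfs set dedup=on tank                      # deduplication, per dataset
  zfs get checksum tank                      # block checksums are on by default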

I am very grateful to the OpenZFS community, and I think they deserve praise for their work. Saying the code is not maintained is quite unfair.

  • > Saying the code is not maintained is quite unfair

    IMO it's best viewed as a legal positioning of professing ignorance such that Oracle's thugs don't go after him for "copying" ZFS features, knowingly developing software that will be mixed with CDDL code, etc.

    It's similar to how it's not a good idea for an engineer to read patents.

He mentioned that he didn't think it was being maintained. It's more or less been forked, no?

Has Linus not seen the work that the OpenZFS folks are doing?

ZFS is amazing, and I would sooner go to a BSD flavor with a fun set of userland utilities than give it up.

  • >Has Linus not seen the work that the OpenZFS folks are doing?

    That's what he meant by the Oracle licensing issues: the Java API infringement case against Google.

  • He also feels ZFS "was always more of a buzzword than anything else". Yikes.

    • Honestly, I wouldn't bash him for this comment. Not everyone runs a 10+ TB array at their home for storage and backup purposes.

      ZFS doesn't primarily target single disks and small arrays anyway. :)

      14 replies →

  • Isn't that beside the point? OpenZFS is still CDDL.

    • I think it is beside the point as far as the risks of merging anything into Linux go. He's right on that topic, of course.

      But it is a separate point he made about using zfs in general, and it's certainly not correct if you take one look at the activity in the zfsonlinux project on GitHub.

  • It's pretty clear that Linus simply doesn't have a clue about ZFS, and he just exposed himself as somebody who repeats stuff he read on some Linux forum or something.

    There is no way he would come to those conclusions after any technical evaluation of his own.

Switching to FreeBSD now for my storage. I have an 8TB database setup. The primary DB is running on LVM(cache)/XFS, which gives very satisfying speed, but I really love my secondary mirror DB, whose storage is on ZFS. I do daily snapshots and daily incremental backups via send/recv to another ZFS location. No other FS I am aware of provides this functionality this easily. Linus seems to have never used ZFS. Although I can understand his issues with the licensing, he is ranting about ZFS. That's a shame.
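
For anyone curious, that backup flow is roughly this (a sketch; dataset, snapshot, and host names are illustrative):

  zfs snapshot tank/db@today
  # Send only the delta since yesterday's snapshot to the mirror host:
  zfs send -i tank/db@yesterday tank/db@today | ssh mirror zfs recv vault/db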

Alright, Linus, I'll make you a deal: I'll consider dropping ZFS when you ship a production-grade BTRFS (or reiserfs or anything else with the same features).

  • bcachefs is aiming to replace btrfs.

    • Yep :) I'd forgotten about that one, but when it gets merged I will have to seriously consider it! Unfortunately, that's probably years out, so I'm stuck on ZFS for now. I also have some portability concerns (ZFS works on FreeBSD, NetBSD, illumos, and Linux, and this was a selling point for me), but I'll probably get over it, or at least mostly switch to bcachefs when it goes mainline.

Do you know that FreeBSD is actually a usable modern OS these days? ;) I run a FreeBSD desktop with Nvidia drivers without any issues. OpenJDK works great, among other things. And it supports ZFS natively, and root on ZFS is the default installation option.

  • Unfortunately some popular software (like Docker) doesn't work (afaik?) on FreeBSD which might hold a lot of people back.

ZFS functionality dwarfs the minor issues Linus has with it in my opinion. I find it to be well maintained, and not just bug fixes but new features keep being added as well. If I couldn't use ZFS on linux anymore, I wouldn't hesitate to setup another system just so I could keep using ZFS.

Probably should add Java to that list. The sooner Oracle stops existing, the better.

"Oracle's litigious nature". What a beautifully short and concise phrase. I immediately had to write this down and stash it as an argument for the next time someone at work pushes to go for "Oracle $product" after having received a bottle of wine from their sales team.

Honestly, at this point, if I can't get ZFS in Linux I would move to FreeBSD whenever I need a big filesystem. How does the Linux® Binary Compatibility layer work on FreeBSD?

  • It implements the x86 and x86_64 Linux system call ABI. Linux ELF binaries get vectored to an alternate system call table implemented by the compatibility layer. There are some other components like an implementation of a Linux-compatible procfs. How well it works in practice really depends on how far off the beaten path you go. There are lots of non-essential pieces that are not implemented, but for example I know of people running Steam on FreeBSD.
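
    Enabling it looks roughly like this (a sketch; the procfs mount assumes the usual /compat/linux prefix):

      kldload linux64                                   # 64-bit Linux ABI module
      sysrc linux_enable="YES"                          # load it on every boot
      mount -t linprocfs linprocfs /compat/linux/proc   # Linux-style procfs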

  • I've run the Oracle JRE and OpenJDK with it, and both work OK; also some BlackBerry SDK tools built for Linux. I'm sure there are some rough edges, and I don't know about performance, but once I mounted the appropriate filesystems, things were working, and that was good enough for me. I think that you do have to pick between a current release of FreeBSD with 64-bit Linux binaries or an older release of FreeBSD with 32-bit Linux binaries, with no way to support both sizes on the same host; but that might be me misremembering.

  • The main thing I want is for OnlyOffice or Collabora to work on FreeBSD in some capacity and I haven't been able to do it (both have open issues that receive very little attention). I want to run my self hosted office solution on the same machine as the data, and I'd really rather avoid VMs.

    So, I use Linux because Docker and BTRFS work just fine for my use case. I prefer FreeBSD, but unfortunately I'm unable to solve my problems easily with just FreeBSD, so I'm using something else.

Linus is correct in his arguments. As the lead of the Linux project, he shouldn't merge things that he feels aren't up to snuff from a license point of view.

That's why we have different software.

If you asked Theo to merge an encryption algorithm for example into OpenSSH and OpenBSD - he's going to have an opinion about it - and that's his thing.

Why would this be controversial at all?

  • Because people like ZFS and Linux so they want to combine the two.

    • Then they can go ahead and combine the two! ZFS on linux has a whole team of maintainers.

      Linus can do what he wants in regards to his branch (which because he's the lead, becomes official Linux), but there's no reason any one else (or any distro) can't do the integration. That's how open source works!

      Of course, whoever does the integration may incur Oracle's wrath. Tread at your own discretion. If those people like it so much that they will put up their own money when Oracle's lawyers come calling, that's completely up to them.

      In my opinion, people who constantly clamour for such things against the technical judgment of open source maintainers are freeloaders. They can propose ideas, but just because the maintainer doesn't want to do it doesn't mean they can scream bloody murder. Just put up your own money and fork it and/or maintain your own fork, which is exactly what the ZFS on linux community is doing - which is the right thing.

      Anyone else can work with the ZFS on linux maintainers to take a bit of the burden on, whether it's rebasing or updating docs on how the integration works, etc. It's a group effort.

Polite reminder that Kent Overstreet is still plugging away at a new copy-on-write FS for Linux called bcachefs. One day, I hope it'll replace ZFS for my uses.

I'm not involved with the project in any way, apart from sending him a few bucks a month on patreon. It's literally the only open source thing I sponsor; it seems like a really worthwhile effort especially considering Linus' advice here...

Perhaps if one wanted to use ZFS they should just use a kernel that supports it?

ZFS is certainly "nice to have" on desktop, but the main use case is going to be servers and NAS. You can use BSD there, it won't bite.

Or do use ZFS, just know that Oracle sucks and you have to jump through hoops because of it...

Also, while ZFS for me has been performant, that seems like a silly reason to decide to use it or not use it. I think ZFS pools and snapshots would be among the deciding factors to use it or not.

FWIW, as some other commenters have said, I'd rather drop Linux than drop ZFS. I'm actually only even running Linux on my home server right now because I decided to try Proxmox out on it months ago and it was soooo obscenely easy to install I haven't bothered to reset it yet (though I need to for various reasons; Proxmox itself being the first, ha).

Really all I care about for my host OS these days is the ability to do virtualization and GPU pass-through... Linux is an option but not the only one. Having a robust storage system where drives can fail, everything exists as one logical drive, data is replicated, there are snapshots (including RAM), and I literally don't have to worry about it - that, though, is really only available with ZFS.

  • Oracle has not actually done anything yet. It's the lack of belief in the license.

Yet another user testimony: for simple volumes and snapshots at home, I switched from ZFS to BTRFS because ZFS on Fedora was giving me too many issues. It simply wasn't integrated well enough into the system. That had nothing to do with the ZFS filesystem implementation, merely the packaging.

Either way, BTRFS works for everything I need it to do and it's native.

Typical Linus bullshit ...

"Don't use ZFS. It's that simple. It was always more of a buzzword than anything else, I feel, and the licensing issues just make it a non-starter for me."

Yeah, just use XFS or EXT4 without ANY data consistency ... you could also use FAT32, it's a filesystem on a similar level.

Until I have a viable alternative that gives me snapshotting (so I can make consistent backups), that advice is worthless to me.

  • Yeah. There's no decent replacement for ZFS. I use ZFS + KVM + Sanoid + Borg(matic) + Borgbase. With the native encryption and TRIM support added to ZFS in 0.8 there isn't anything close in terms of ease of use.

    Linus seems out of touch on this one IMHO.

  • XFS on an LVM thin pool thin LV gives you fast CoW snapshots and is rock solid. Really, try it. :)

    • I don't remember the details, but when I looked into switching to LVM snapshots, I ran into some sort of blocker.

      My use-case is that I run Sandstorm, and want to be able to back it up while it's running. That means:

        - Ensure there aren't any existing snapshots
        - Take a snapshot
        - Mount the snapshot as a filesystem
        - Run tarsnap against that filesystem
        - Release the snapshot
      

      I think the trouble I ran into was at the mount step.
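
      If the filesystem was XFS, the mount step commonly trips over the duplicate filesystem UUID; a sketch of the whole flow with the usual workaround (names and sizes are illustrative, and the nouuid guess is only my assumption about the blocker):

        lvcreate --snapshot --size 5G --name ss_snap vg0/sandstorm
        mount -o ro,nouuid /dev/vg0/ss_snap /mnt/snap   # XFS refuses duplicate UUIDs without nouuid
        tarsnap -c -f sandstorm-snapshot /mnt/snap
        umount /mnt/snap && lvremove -y vg0/ss_snap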

    • At the application level, snapshotting is not a way to do "consistent backups". A consistent backup is a backup with a planned or known state when restoring.

    • Sure it is. Quiesce your application, take a snapshot, then resume the application. Then you can back up the snapshot. The alternative can be lengthy downtime for your application.

    • I can't do anything about partial file writes in the general case, but it's close enough - and any ACID database should be able to restore from such a snapshot.

  • LVM snapshots have been good enough for consistent backups for the last 20 years. Then there are also thin snapshots if you feel fancy.

Maybe offtopic, but I'm impressed by the rest of the conversation that generated that message from Linus: there is a whole thread in which Linus gets to explain in detail various locking mechanisms in the kernel, their pros and cons, etc.

I think we don't normally see this happen: situations in which the people technically responsible for the Apple or MS kernels get to answer questions and explain in that level of detail.

I also think that is even more interesting than the original blog post that started the thread. Someone should harvest all these Linus comments and organize them into some kind of "lectures" library.

His reasons may not be perfect but he's right.

If you use a filesystem that isn't mainline and it breaks, it's all on you to figure it out and fix it. Having used experimental filesystems before and been burned, I would rather stick with what I know and what won't change overnight.

I'll keep an eye open for new filesystems, of course, but if it's not mainline, then unless it's for personal hacking, no.

> I own several SBCs (single-board computers) that do not run mainline kernels. The company that makes them provides their own patched kernel, so if it breaks I'm up the creek, but I know who to bitch at.

I kind of wish somebody with money took Oracle to court over ZFS and established that it doesn't have Oracle taint, so we can move on. Java would be good too, but that fight went to the wrong point of law to argue. OpenSolaris... I feel meh about it, but perhaps it too needs this.

Money doesn't solve all problems, but money can solve legal problems; cf. the lawsuits companies like Newegg file to get rid of the IPR leeches. (And Cloudflare?)

So what would be a viable ZFS alternative on a Linux-based NAS I'm planning to build in a few months? I'm currently running NAS4Free on a giant rack-sized thing I built using a Mini-ITX Atom board years ago, and ZFS works like a charm, but I also intend to move some day to the ARM architecture, which unfortunately the *BSD-based NAS software doesn't support (yet). I'm tempted by this smaller hardware in particular: https://wiki.kobol.io/helios64/intro/ So far the only viable option would be Openmediavault, which supports ZFS only through external modules, which I wouldn't be entirely comfortable with. I'd only arrange disks as RAID1 pairs, however.

  • I use BTRFS and it works fine. Just don't use RAID5/6 and you should be good to go.

    I've also heard success stories using ZFS on Linux, but I haven't bothered because I'd just rather use something that's in the kernel instead of something outside it.

Are there any lawyers who can verify the legal claims made? Anyone can file a lawsuit at any time for whatever reason; that doesn't mean the lawsuit is valid. Sure, using ZFS on your home system or even in a small implementation is not a big deal, but if there's a company with money, lawsuits happen.

I would think that, given the wide availability and usage of ZFS, and the fact that this is no longer 20 years ago when companies would sue as if they were 19th-century tycoons, there must be some sort of "well, you let ZFS be used this long without suing; you can't just let it be open for so long and then sue when it is profitable" statute in American law. Again, I'm not a lawyer, and I know enough about law to know I do not know enough about law.

  • To be honest, given Oracle's history, I'm not sure I would even trust a lawyer's opinion, since it could still cost you a lot of money to defend against an Oracle lawsuit even if you win.

Don't go too far, people. Linus's criticism of ZFS is concise: buzzword & licensing.

Here, Linus is putting the emphasis on the license, not on the technical details of ZFS, whatever they may be. He clearly doesn't use ZFS and is not even interested in the problem ZFS solves. He is only "interested" in his (and the community's) control over the ZFS source code.

So, his logic basically becomes this:

ZFS is not mainline-able, so veto it until Oracle changes its attitude - a simple old FOSS infestation tactic.

So, please, move on, people. The discussion is not even about the file system...

  • Actually, what stops mainline integration is the community's belief in the strength of open source licensing.

mdadm + LVM + ext4 does everything I need and more: thin and thick provisioning, SSD caching, snapshots. If I were to use another file system, it would be something like Ceph or GlusterFS.
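
For concreteness, that stack goes together roughly like this (a sketch; device, VG, and LV names are illustrative):

  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
  pvcreate /dev/md0              # the md array becomes an LVM physical volume
  vgcreate vg0 /dev/md0
  lvcreate -L 500G -n data vg0   # thick LV; lvcreate --thin for thin provisioning
  mkfs.ext4 /dev/vg0/data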

> and given Oracle's interface copyright suits (see Java)

This seems pretty ironic, given that the whole problem is Linux developers trying to claim and enforce that only other GPL code is allowed to use its APIs (which, IMHO, goes beyond both the intent and letter of the license). The issues aren't exactly the same, but Linux sure seems to be a lot closer to Oracle than to Google here.

Oracle doesn't own all the rights to OpenZFS; people are intentionally adding their own copyrighted code to ensure it stays under a copyleft license, so Oracle can't make it closed source by owning the rights (which they did with the old version).

The title is misleading, tbh. I know it's a direct quote from the article, but it takes it out of the context of "I don't want to support your third-party code".

Nicely written, Linus; very soft and considerate of others' feelings. Not as funny anymore, but overall it will make more people happy :)

Both my main servers at my house use ZFS, neither use Linux on bare metal. FreeNAS (multiple ZFS mirrors) and SmartOS running a ZFS mirror.

The situation rules out Linux for many potential applications and side projects I might undertake.

Reminds me of the Facebook and React licensing scandal back in 2016/17.

You can never trust someone who so easily changes their licenses from private to public. Who knows, in the future they might change it back to private!?

The Linux kernel broke user space by egregiously decreeing that kernel modules cannot use certain CPU features unless they are GPL. Derived work, my ass.

Honestly, ext4 is fine for most use cases, even on SSDs. If you really need more performance, look at HAMMER; it's meant for high availability. At that point you shouldn't be running Linux anyway; even with PREEMPT_RT it's not going to be the most performant for those kinds of RTOS workloads.

  • HAMMER2 is now the default on DragonflyBSD. If I made a bunch of money during the boom and could spend my days doing open source (like Matt Dillon), porting HAMMER2 might be one of the projects I'd pick up.

    • That would be a neat project. It’s hard to set aside the time when you don’t have much in the time bank. I’ve been wanting to do a lot more research on kernel scheduling, writing my own alternative scheduler, and more research on RTOS design and real-time computing in general.

So my take on this is that the future is Ceph, and you would do better running single-node Ceph than ZFS or BTRFS.

Linus should stick to his kernel and git. It feels like he has very little knowledge about computing outside of those areas. That would also make his abrasive way of communicating easier to tolerate. He mostly comes across as the typical autistic savant type that's great at an extremely specific area, but only that.

  • No personal attacks on HN, please. Maybe you don't owe Linus better (though why not?), but you owe the community better if you want to post here.

    Also, "typical autistic savant type" breaks the site guideline against calling names. Please don't do that.

    https://news.ycombinator.com/item?id=21089837

    Can you please not? Making an account to be anonymous on HN is fine in principle, but people sometimes start breaking the rules after they do so, and that is not cool.

I'm glad Linus is now acting as legal counsel for Linux. It's scary that he is implying he's making these decisions without the aid of counsel.

ZFS threatens the power of Linux and therefore Linus’ job. That’s the long and short of it. Mac and Windows have been able to maintain stable interfaces for binary kernel drivers for 20 years.

i wouldn't use ZFS either. my guess is 90% of ZFS users have never run failure scenarios and grappled with potential failure modes of ZFS, nor even know that you really need ECC RAM to run ZFS without fear of existential data corruption due to bit flips.

furthermore, the allure of ZFS means people aren't testing their disaster plans until it's too late, bc ZFS is "resilient".

lastly, data recovery is expensive as all hell, if even possible. i am talking on the order of four figures for 100s of GBs, and sketchy probabilities.

ZFS is the ultimate "pet" in the pets vs. cattle continuum. in a world where shoddy engineering and "break things fast" is the zeitgeist, i'm happy to use a classic dumb FS like ext4 and pathologically back it up and test said backups.

i would not risk any of my personal treasured data to ZFS due to inherent existential threats. i would implore ZFS users to evaluate and test their setups, and especially use ECC RAM - like, starting now - to protect their assets.

  • > you really need ECC RAM to run ZFS

    This is FUD. ZFS does as well as, if not better than, the average file system, with its focus on integrity, online scrubs, etc. On the other hand, "use ECC RAM" is standard best practice for any mission-critical data; no file system magic is going to fix computer RAM lying to you 100% of the time. It's the standard recommendation for ZFS because ZFS is rarely deployed in environments that can tolerate data corruption.

    > pathologically backing it up and testing said backups.

    ZFS doesn't remove the need for backups, and no one seriously makes that argument. Though snapshots + send/receive make them very easy to do in ZFS.
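
    For reference, an online scrub is a one-liner (pool name illustrative):

      zpool scrub tank       # walk the pool, verifying every block's checksum
      zpool status -v tank   # report anything found and repaired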

    • I've detected broken memory chips thanks to BTRFS checksumming finding errors, luckily before it had a chance to corrupt any written data. So if anything, a properly checksummed filesystem makes non-ECC RAM less dangerous.

  • > ZFS is the ultimate "pet" in the pets vs. cattle continuum. in a world where shoddy engineering and "break things fast" is the zeitgeist,

    Live storage is never 'cattle'; that is idiotic. Your running filesystem IS actually a pet. Hard drives are 'cattle', and that's exactly what ZFS treats like 'cattle'.

    ZFS was born out of long frustration with file systems and was systematically designed to protect against data corruption and bad hardware. It is literally the exact opposite of 'move fast and break things'.

    Go and actually watch the videos where the designers show it for the first time. They speak very clearly about how and why they designed it.

    > i would not risk any of my personal treasured data to ZFS due to inherent existential threats. i would implore ZFS users to evaluate and test their setups, and especially use ECC RAM - like, starting now - to protect their assets.

    ZFS has always recommended ECC to its users. No filesystem can protect you from not having it.

  • > you really need ECC RAM to run ZFS without fear of existential data corruption due to bit flips

    https://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&p=...

    http://www.open-zfs.org/wiki/User:Mahrens

    • The problem is that ZFS doesn't have an offline repair tool. A (granted, unlikely) bit flip in an important data structure that gets written to disk makes the whole fs unmountable, and that's it (idk if it has a tool to rescue file data from an unmountable pool? Maybe we should ask Gandi...).

      With e.g. ext4 you can get back to a mountable state pretty much guaranteed with e2fsck. You might lose a few files, or find them in lost+found, etc., but at least you have something.

      The reason ZFS doesn't have an offline repair tool is pretty convincing: once you have zettabytes (that's the marketing) of data, running that repair tool would take too long, so you'd have to do everything to prevent needing it in the first place anyway, including checksumming everything, storing everything redundantly, and using ECC RAM.

      1 reply →

  • So better to use ext4 and let it silently corrupt your data?

    ZFS does indeed catch memory errors. If you are running without ECC, most filesystems will happily write that corrupt data to disk. Unless the corruption is in the metadata, you will be none the wiser.

  • ZFS has seen me through 6 disk failures since I started using it on Nexenta about 10 years ago; zero data loss.

    It's not a backup by itself, but it makes a fine backup target if it's located somewhere else, since it's both redundant (hard to lose data by accident) and snapshotted (hard to lose data by mistake) - it was my local CrashPlan target (alongside cloud) back when CrashPlan supported home users.

So, is Linus pretty much the same guy? I know he took some time off, the kernel team adopted a code of conduct, and there was that introspective e-mail ... but now that he's back ... is it any different?

  • Because he didn't attack any individual, use derogatory language, or break the code of conduct?

    He never agreed to roll over and agree with every technological persuasion, he agreed to be nicer to people. This was nice to people, but rude to a technology (ZFS), that seems consistent.

  • This post and a few related ones in the chain seem perfectly fine to me. Does something here seem offensive or harsh to you?

  • Looking at reddit.com/r/linusrants ...

    Most of his worst rants were directed at maintainers who committed changes that resulted in bugs in the kernel.

    The OP of this thread is from a user, so perhaps we need to wait until another big bug gets committed to see the results of his hiatus.