
Comment by Jonnax

6 years ago

Here's his reasoning:

"honestly, there is no way I can merge any of the ZFS efforts until I get an official letter from Oracle that is signed by their main legal counsel or preferably by Larry Ellison himself that says that yes, it's ok to do so and treat the end result as GPL'd.

Other people think it can be ok to merge ZFS code into the kernel and that the module interface makes it ok, and that's their decision. But considering Oracle's litigious nature, and the questions over licensing, there's no way I can feel safe in ever doing so.

And I'm not at all interested in some "ZFS shim layer" thing either that some people seem to think would isolate the two projects. That adds no value to our side, and given Oracle's interface copyright suits (see Java), I don't think it's any real licensing win either."

Btrfs crashed for me on two occasions. After the last crash, around 2 years back, I installed ZFS (which I had already been using for ~10 years on a FreeBSD server), and it has worked like a charm since then.

I understand Linus's reasoning, but there is just no way I will ever install btrfs. I would rather not update the kernel (I run ZFS on a Fedora root with regular kernel updates and scripts that verify everything is in order with the kernel modules prior to reboot) than use a file system that crashed twice in two years.

Yes, it is very annoying if an update breaks the fs, but currently:

- in 2 years, btrfs crashed on its own twice

- in the next 2 years, an update never broke ZFS

As far as I am concerned, the case for zfs is clear.

This might be helpful to someone: https://www.csparks.com/BootFedoraZFS/index.md

Anyway, Linus is going too far with his GPL agenda. The MODULE_LICENSE mechanism for kernel modules explains why hardware is less well supported on Linux - instead of focusing on getting more support from 3rd-party companies, the devs try to force them onto the GPL. Once you set MODULE_LICENSE to a non-GPL value, you quickly figure out that you can't use most kernel calls. Not the code. The calls.
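To make the complaint concrete, here is a hedged sketch of a minimal out-of-tree module (not a buildable driver; `MODULE_LICENSE` and `EXPORT_SYMBOL_GPL` are the real kernel mechanisms, the rest is illustrative):

```c
#include <linux/module.h>
#include <linux/init.h>

/* Declaring anything other than a GPL-compatible license here means the
 * module loader refuses to resolve symbols exported with EXPORT_SYMBOL_GPL()
 * - only plain EXPORT_SYMBOL() entry points remain usable - and the kernel
 * is marked "tainted" when the module loads. */
MODULE_LICENSE("Proprietary");

static int __init demo_init(void)
{
        /* Any EXPORT_SYMBOL_GPL-only API used here would fail to resolve. */
        pr_info("demo: loaded\n");
        return 0;
}

static void __exit demo_exit(void)
{
        pr_info("demo: unloaded\n");
}

module_init(demo_init);
module_exit(demo_exit);
```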

  • The Linux kernel has been released under GPL2 license since day 1, and I don't think that's ever going to change. Linus is more pragmatic than many of his detractors think - he thankfully refused to migrate to GPL3 because the stricter clauses would have scared away a lot of for-profit users and contributors.

    Relaxing on anything more permissive than GPL2 would instead mean the end of Linux as we know it. A more permissive license means that nothing would prevent Google or Microsoft from releasing their own closed-source Linux, or replacing the source code of most of the modules with hex bloats.

    I believe that GPL2 is a good trade-off for a project like Linux, and it's good that we don't compromise on anything less than that.

    Even though I agree on the superiority of ZFS for many applications, I think that the blame for the missed inclusion in the kernel is on Oracle's side. The lesson learned from NTFS should be that if a filesystem is good and people want to use it, then you should make sure that the drivers for that filesystem are as widely available as possible. If you don't do it, then someone sooner or later will reverse engineer the filesystem anyway. The success of a filesystem is measured by the number of servers that use it, not by the amount of money that you can make out of it. For once Oracle should act more like a tech company and less like a legal firm specialised in patent exploitation.

    • The blame is on Oracle side for sure. No question about it.

      > or replacing the source code of most of the modules with hex bloats.

      OK, good point. I am no longer pissed off about MODULE_LICENSE; I hadn't even thought about that.

  • I agree with the stance on btrfs. Around the same time (2 years back), it crashed on me while I was using it on an external hard disk attached to a Raspberry Pi. Nothing fancy. Since then, I can't tolerate fs crashes; for a user, the fs is supposed to be one of the most reliable layers.

    • Concerning the BTRFS fs:

      I did use it as well many years ago (probably around 2012-2015) in a raid5 configuration, after reading a lot of positive comments about this next-gen fs => after a few weeks my raid started falling apart (while performing normal operations!) as I got all kinds of weird problems => my conclusion was that the raid was corrupt and couldn't be fixed => no big problem, as I did have a backup, but that definitely ruined my initial BTRFS experience. During those times, even though the fs was new and even though there were warnings about it (being new), everybody was very optimistic/positive about it, but in my case that experiment was a disaster.

      That event has held me back until today from trying it again. I admit that today it might be a lot better than in the past, but since people were already positive about it back then (and in my case it broke), it's difficult for me now to say "aha - now the general positive opinion is probably more realistic than in the past", due e.g. to that bug that can potentially still destroy a raid (the "write hole" bug): personally, I think that if BTRFS still makes that raid functionality available while it has such a big bug, while at the same time advertising it as a great feature of the fs, the "unrealistically positive" attitude is still present, and therefore I still cannot trust it. Additionally, that bug being open since forever makes me think it's really hard to fix, which in turn makes me think that the foundation and/or code of BTRFS is bad (which would be the reason the bug cannot be fixed quickly) and that therefore even more complicated bugs might potentially show up in the future.

      Concerning alternatives:

      I have been writing and testing for a looong time a program which ends up creating a big database (using "Yandex ClickHouse" for the main DB) distributed across multiple hosts, where each host uses multiple HDDs to save the data, and which at the same time is able to fight potential "bitrot" ( https://en.wikipedia.org/wiki/Data_degradation ) without having to resync the whole local storage each time a byte on some HDD loses its value. Excluding BTRFS, the only other candidate I found that performs checksums on data is ZFSoL (both XFS and NILFS2 do checksums, but only on metadata).

      Excluding BTRFS because of the reasons mentioned above, I was left only with ZFS.

      I have now been using ZFSoL for a couple of months, and so far everything has gone very well (a bit difficult to understand & deal with at the beginning, but extremely flexible), and performance is good as well (though to be fair, that's easy in combination with the ClickHouse DB, as the DB itself already writes data in a CoW way, so blocks of a table stored on ZFS are always very likely to be contiguous).

      On one hand, technically, I'm happy now. On the other hand, I do admit that the licensing problems and the non-integration of ZFSoL into the kernel carry risks. Unluckily, I just don't see any alternative.

      I donate something monthly to https://www.patreon.com/bcachefs but I don't have high hopes - not much is happening, and BCACHE (even though it is currently integrated in the kernel) hasn't been very good in my experience (https://github.com/akiradeveloper/dm-writeboost worked A LOT better, but I'm not using it anymore since I no longer have a use case for it, and it was a risk as well, not being included in the kernel), so BCACHEFS might end up the same.

      Bah :(

  • Btrfs like OCFS is pretty much junk. You can do everything you need to on local disk with XFS and if you need clever features buy a NetApp.

  • Both ZFS and BTRFS are essentially Oracle now. BTRFS was largely an effort by Oracle to copy Sun's ZFS advantages in a crappy way, which became moot once they acquired Sun. ZFS also requires (a lot of) ECC memory for reliable operation. It's great tech; pity it's dying a slow death.

    • I’d argue that other file systems also require ECC RAM to maximize reliability. ZFS just makes it much more explicit in its docs and surfaces errors rather than silently handing back memory-corrupted data.

    • ZFS needs ECC just as much as any other file system. That is, it has no way of detecting in-memory errors. So if you want your data to actually be written correctly, it's a good idea to use ECC. But the myth that you "need" ECC with ZFS is completely wrong. It would be better if you did have ECC, but don't let that stop you from using ZFS.

      As far as it needing a lot of memory, that is also not true. The ARC will use your memory if it's available, because it's available! You paid good money for it, so why not actually use it to make things faster?

      3 replies →

    • I have examined all the counterarguments against ZFS myself and none of them have been confirmed. ZFS is stable and not RAM-hungry as is constantly claimed. It has sensible defaults, namely to use all RAM that is available and to release it quickly when it is used elsewhere. ZFS on a Raspberry Pi? No problem. I myself have a dual socket, 24 Core Intel Server with 128 GB RAM and a virtual Windows SQL Server instance running on it. For fun, I limited the amount of RAM for ZFS to 40 MB. Runs without problems.
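      (For reference, capping the ARC like this on ZFS on Linux is a one-line module option; `zfs_arc_max` is the real ZoL parameter, and the value below, in bytes, is just an example:)

      ```
      # /etc/modprobe.d/zfs.conf
      # Limit the ZFS ARC to 2 GiB; takes effect when the zfs module loads.
      options zfs zfs_arc_max=2147483648
      ```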

That's his reasoning for not merging ZFS code, not for generally avoiding ZFS.

  • Here are his reasons for generally avoiding ZFS from what I consider most important to least.

    - The kernel team may break it at any time, and won't care if they do.

    - It doesn't seem to be well-maintained.

    - Performance is not that great compared to the alternatives.

    - Using it opens you up to the threat of lawsuits from Oracle. Given history, this is a real threat. (This is one that should be high for Linus but not for me - there is no conceivable reason that Oracle would want to threaten me with a lawsuit.)

    • I'm baffled by such arguments.

      > It doesn't seem to be well-maintained.

      The last commit is from 3 hours ago: https://github.com/zfsonlinux/zfs/commits/master. They have dozens of commits per month. The last minor release, 0.8, brought significant improvements (my favorite: FS-level encryption).

      Or maybe this refers to the (initial) 5.0 kernel incompatibility? That wasn't the ZFS dev team's fault.

      > Performance is not that great compared to the alternatives.

      There are no (stable) alternatives. BTRFS certainly not, as it's "under heavy development"¹ (since... forever).

      > The kernel team may break it at any time, and won't care if they do.

      That's true; however, the amount of breakage is no different from any other out-of-tree module, and it is unlikely to happen with a patch version of a working kernel (in fact, it happened with the 5.0 release).

      > Using it opens you up to the threat of lawsuits from Oracle. Given history, this is a real threat. (This is one that should be high for Linus but not for me - there is no conceivable reason that Oracle would want to threaten me with a lawsuit.)

      "Using" it won't open you up to lawsuits; ZFS is under the CDDL, which is a free and open-source software license.

      The problem is (taking Ubuntu as representative) shipping the compiled module along with the kernel, which is an entirely different matter.

      ---

      [¹] https://btrfs.wiki.kernel.org/index.php/Main_Page#Stability_...

      86 replies →

    • A former employer was threatened by Oracle because some downloads for the (only free for noncommercial use) VirtualBox Extension Pack came from an IP block owned by the organization. Home users are probably safe, but Oracle's harassment engine has incredible reach.

      14 replies →

    • > There is no conceivable reason that Oracle would want to threaten me with a lawsuit.

      I don't think it has to be conceivable with Oracle...

      Unfortunately I have to agree with Linus on this one. Messing with Oracle's stuff is dangerous if you can't afford a comparable legal team.

      2 replies →

    • > there is no conceivable reason that Oracle would want to threaten me with a lawsuit.

      Money. Anecdotally that's the primary reason Oracle do anything.

      4 replies →

    • "there is no conceivable reason that Oracle would want to threaten me with a lawsuit."

      Don't be so sure about this.

    • None of these are good reasons to purposely hinder the optional use of ZFS as a third party module by users, which is what Linux is doing.

      23 replies →

    • > - Performance is not that great compared to the alternatives.

      CoW filesystems do trade performance for data safety. Or did you mean there are other _stable/production_ CoW filesystems with better performance? If so, please do point them out!

      16 replies →

    • >- Using it opens you up to the threat of lawsuits from Oracle. Given history, this is a real threat. (This is one that should be high for Linus but not for me - there is no conceivable reason that Oracle would want to threaten me with a lawsuit.)

      No. Distributing it (i.e. a precompiled distro with ZFS) will. You are free to run any software on your machine as you desire.

    • This reminds me of the adaptation of a Churchill quote that "ZFS is the worst of the file systems, except for all others."

  • The problem with ZFS is that it isn't part of Linux kernel.

    Linux project maintains compatibility with userspace software but it does not maintain compatibility with 3rd party modules and for a good reason.

    Since modules have access to any internal kernel API it is not possible to change anything within kernel without considering 3rd party code, if you want to keep that code working.

    For this reason the decision was made that if you want your module to keep working, you need to make it part of the Linux kernel; then, if anybody refactors anything, they need to consider the modules that would be affected by the change.

    Not allowing the module to be part of the kernel is a disservice to your user base. While there are modules like that which are maintained moderately successfully (Nvidia, VMware, etc.), this all comes at the cost of the users and userspace maintainers who have to deal with it.

    • It isn't just ZFS. All sorts of drivers get broken because Linux refuses to offer a stable API, saying your code should be in the kernel, but also often refuses to accept drivers into the kernel, even open-source code with no particular quality issues (e.g. quickcam, reiserfsv4).

      Use FreeBSD where there's a stable ABI and you don't have these problems.

      7 replies →

  • And he was doing fine up to that point. For IMO good reasons, ZFS will likely never be merged into Linux. And filesystem kernel modules from third parties have a pretty long history of breakage issues going back to some older Unixes.

    That's going to be plenty of reason not to use ZFS for most people. The licensing by itself is also certainly a showstopper for many.

    But I'm not sure his other comments are really fair and, had Oracle relicensed ZFS n years back, ZFS would almost certainly be shipping with Linux, whether or not as the typical default I can't say. It certainly wasn't just a buzzword and there were a number of interesting aspects to its approach.

  • Well, he says

    > It was always more of a buzzword than anything else, I feel, and the licensing issues just make it a non-starter for me.

    So presumably the licensing problem mentioned by your parent's comment is weighing heavily here. I think this "don't use ZFS" statement is most accurately targeted at distro maintainers. Anyone not actually redistributing Linux and ZFS in a way that would (maybe) violate the GPL is not at any risk. That means even large enterprises can get away with using ZoL.

  • It's exactly that, when combined with the longstanding practice of maintaining compatibility with userspace, but reserving the right to refactor kernel-space code whenever and wherever needed. If ZFS-on-linux breaks in a subtle or obvious way due to a change in linux, he can't afford to care about that - keeping the linux kernel codebase sane while adding new features, supported hardware, optimizations, and fixes at an honestly scary rate, is not that easy.

    See also https://www.kernel.org/doc/html/latest/process/stable-api-no...

    (fuse is a stable user-space API if you want one ... it won't have the same performance and capabilities of course ...)

    • > he can't afford to care about that - keeping the linux kernel codebase sane while adding new features, supported hardware, optimizations, and fixes at an honestly scary rate, is not that easy.

      Maybe, but the complaints seem to be less about technical changes accidentally breaking ZFS and more about changes of a political nature. There is speculation that some changes might have been meant to _intentionally_ break ZFS, with the breakage then passed off as an accident, since ZFS isn't (and can never be) maintained in tree. Basically along the lines of "we don't like out-of-tree kernel modules, so we make life hard for them". No idea if this is actually the case or if people are just spinning things together. Even if it is the case, I'm not sure what to think of it, because it's at least partially somewhat understandable.

      1 reply →

  • "Don't use ZFS. It's that simple. It was always more of a buzzword than anything else, I feel, and the licensing issues just make it a non-starter for me."

    When he says that, I think of the $500 million Sun spent on advertising Java.

  • Well he had this:

    > as far as I can tell, it has no real maintenance behind it either any more

    Which simply isn't true. They just released a new ZFS version with encryption built in (no more ZFS + LUKS) and they removed the SPL dependency (which didn't support Linux 5.0+ anyway).

    I use ZFS on my Linux machines for my storage and I've been rather happy with it.

    • Same, for at least 6 years in a 4 drive zraid array. It always reads and writes at full gigabit ethernet speeds and I haven't had any downtime other than maintaining FreeBSD updates which are trivial even when going from 10.x to 11 to 12.

      9 replies →

  • Relevant bits:

    "Don't use ZFS. It's that simple. It was always more of a buzzword than anything else, I feel, and the licensing issues just make it a non-starter for me.

    The benchmarks I've seen do not make ZFS look all that great. And as far as I can tell, it has no real maintenance behind it either any more, so from a long-term stability standpoint, why would you ever want to use it in the first place?"

    • > The benchmarks I've seen do not make ZFS look all that great.

      The thing about ZFS that actually appeals to me is how much error-checking it does. Checksums/hashes are kept of both data and metadata, and those checksums are regularly verified to detect and fix corruption. As far as I know, it (and filesystems with similar architectures) are the only ones that can actually protect against bit rot.

      https://github.com/zfsonlinux/zfs/wiki/Checksums

      > And as far as I can tell, it has no real maintenance behind it either any more, so from a long-term stability standpoint, why would you ever want to use it in the first place?"

      It has as much maintenance as any open source project: http://open-zfs.org/. IIRC, it has more development momentum behind it than the competing btrfs project.
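      As a toy illustration of the idea (not ZFS's actual format - it stores fletcher4 or SHA-256 checksums per block in a Merkle-tree arrangement), per-block checksum verification can be sketched in a few lines of Python:

```python
import hashlib

def write_block(store, key, data):
    # Store the block alongside a SHA-256 checksum of its contents.
    store[key] = (data, hashlib.sha256(data).hexdigest())

def read_block(store, key):
    # On read, recompute the checksum; a mismatch means silent corruption.
    data, digest = store[key]
    if hashlib.sha256(data).hexdigest() != digest:
        raise IOError(f"bitrot detected in block {key!r}")
    return data

store = {}
write_block(store, "block0", b"important data")
assert read_block(store, "block0") == b"important data"

# Simulate a flipped bit on disk: the stored checksum catches it on read.
data, digest = store["block0"]
store["block0"] = (bytes([data[0] ^ 1]) + data[1:], digest)
try:
    read_block(store, "block0")
except IOError:
    print("corruption detected")
```

      ZFS does this check transparently on every read, and a periodic scrub walks the whole pool doing the same verification against redundant copies.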

      3 replies →

    • Linus is just wrong as far as maintenance, as a look at the linux-zfs lists would show.

      From my perspective, it has no real competitor under Linux, which is why I use it. I don't consider btrfs mature enough for critical data. (Others can reasonably disagree; I have intentionally high standards for data durability.)

      Aside from legal issues, he's talking out of his ass.

      2 replies →

    • Not sure where that belief comes from. But it might be that many benchmarks are naive and compare it against other filesystems in single-disc setups with zero tuning. Since its metadata overheads are higher, it's definitely slower in this scenario. However, put a pool onto an array of discs and tune it a little, and the performance scales up and up leaving all Linux-native filesystems, and LVM/dm/mdraid, well behind. It's a shame that Linux has nothing compelling to do better than this.

      32 replies →

    • I think speed is not the primary reason many (most?) people use ZFS; I think it's mostly about stability, reliability and maintainability.

> And I'm not at all interested in some "ZFS shim layer" thing either

If there is no "approved" method for creating Linux drivers under licenses other than the GPL, that seems like a major problem that Linux should be working to address.

Expecting all Linux drivers to be GPL-licensed is unrealistic and just leads to crappy user experiences. nVidia is never going to release full-featured GPL'd drivers, and even cooperative vendors sometimes have NDAs which preclude releasing open-source drivers.

Linux is able to run proprietary userspace software. Even most open source zealots agree that this is necessary. Why are all drivers expected to use the GPL?

---

P.S. Never mind the fact that ZFS is open source, just not GPL compatible.

P.P.S. There's a lot of technical underpinnings here that I'll readily admit I don't understand. If I speak out of ignorance, please feel free to correct me.

  • I am also not an expert in this space, but if I understand correctly, the reason the Linux Nvidia driver sucks so much is that it is not GPL'd (or open source at all).

    There is little incentive for Nvidia to maintain a linux specific driver, but because it is closed source the community cannot improve/fix it.

    > Why are all drivers expected to use the GPL?

    I think the answer to this is: drivers are expected to use the GPL if they want to be mainlined and maintained - as Linus said, other than that you are "on your own".

    • > drivers are expected to use the GPL if they want to be mainlined and maintained

      I think parent comment wasn't asking for third party, non-GPL drivers to be mainlined, but for a stable interface for out-of-tree drivers.

      2 replies →

    • I would expect a large fraction of Nvidia's GPU sales to be from customers wanting to do machine learning. What platform do these customers typically use? Windows?

      How do the Linux and Windows drivers compare on matters related to CUDA?

      3 replies →

    • You make it sound like the idea is "if you GPL your driver, we'll maintain it for you", which is kind of bullshit. For one, kernel devs only really maintain what they want to maintain. They'll do enough work to make it compile, but they aren't going to go out of their way to test it. Regressions do happen. More importantly, though, they very purposefully do not maintain any stability in the driver ABI. The policy is actively hostile to the concept of proprietary drivers.

      Which is really kind of hilarious, considering that so much modern hardware requires proprietary firmware blobs to run.

  • > Expecting all Linux drivers to be GPL-licensed is unrealistic and just leads to crappy user experiences. nVidia is never going to release full-featured GPL'd drivers, and even corporative vendors sometimes have NDAs which preclude releasing open source drivers.

    Nvidia is pretty much the only remaining holdout here on the hardware driver front. I don't see why they should get special treatment when the 100%-GPL model works for everyone else.

  • ZFS is not really GPL-incompatible either, but it doesn't matter. Between FUD and Oracle's litigiousness, the end result is that there is no way to overcome the impression that it is GPL-incompatible.

    But it is a problem that you can't reliably have out-of-tree modules.

    Also, Linus is wrong: there's no reason that the ZoL project can't keep the ZFS module in working order, with some lag relative to updates to the Linux mainline, so as long as you stay on supported kernels and the ZoL project remains alive, then of course you can use ZFS. And you should use ZFS because it's awesome.

    • > But it is a problem that you can't reliably have out-of-tree modules.

      That is the bit I'm trying to get at. Yes it would be best if ZFS was just part of Linux, and maybe some day it can be after Oracle is dead and gone (or under a new leadership and strategy). But it's almost beside the point.

      Every other OS supports installing drivers that aren't "part" of the OS. I don't understand why Linux is so hostile to this very real use case. Sure it's not ideal, but the world is full of compromises.

      3 replies →

  • There's a unique variable here and that's Oracle.

    That shouldn't actually matter; it should just depend on the license. But millions in legal fees says otherwise.

  • >If there is no "approved" method for creating Linux drivers under licenses other than the GPL, that seems like a major problem that Linux should be working to address.

    As a Linux user and an ex android user, I absolutely disagree and would add that the GPL requirement for drivers is probably the biggest feature Linux has!

    • Yes, the oftentimes proprietary Android Linux drivers are such a pain. Not only do they make it harder to reuse the hardware outside of Android (e.g. in a laptop or similar), but they also tend to cause delays with Android updates, and sometimes they make it impossible to update a phone to a newer Android version even if the phone manufacturer wants to do so.

      Android did start making this less of a problem with the HAL and such, but it's still a problem, just a smaller one.

  • There is a big difference between a company distributing a proprietary Linux driver and the Linux project merging software under a GPL-incompatible license. In the first case, it is the Linux developers who can raise the issue of copyright infringement, and the company that has to defend its right to distribute. In the latter, the roles are reversed, with the Linux developers having to argue that they are in compliance with the copyright license.

    A shim layer is a poor legal bet. It assumes that a judge, who might not have much technical knowledge, will agree that putting this little bit of technical trickery between the two incompatible works somehow turns them from a single combined work into two cleanly separated works. It could work, but it could also very easily be seen as meaningless obfuscation.

    > Why are all drivers expected to use the GPL

    Because a driver tightly depends on the kernel. It is this relationship that distinguishes two works from a single work. An easy way to see this is how a music video works. If I create a file with a video part and an audio part and distribute it, legally this will be seen as me distributing a single work. I also need additional copyright permission in order to create such a derivative work, rights that go beyond just distributing the individual parts. If I argued in court that I was merely distributing two different works, then the relationship between the video and the music would be put into question.

    Userspace software is generally seen as an independent work. One reason is that such software can run on multiple platforms, but the primary reason is that people simply don't see it as an extension of the kernel.

  • There is an "approved" method - write and publish your own kernel module. However, if your module is not GPL-licensed, it cannot be merged into the Linux kernel itself, and you must keep up with the maintenance of the code yourself. This is a relatively fair requirement, IMO.

    • ...which is what the ZFS on Linux team are doing?

      The issue here is that which parts of the kernel API are available to non-GPL modules has been made a moving target from version to version, which might as well be interpreted as "just don't bother anymore".

      1 reply →

  • > If there is no "approved" method for creating Linux drivers under licenses other than the GPL, that seems like a major problem that Linux should be working to address.

    It's a feature, not a bug. Linux is intentionally hostile to binary-blob drivers. Torvalds described his decision to go with the GPLv2 licence as "the best thing I ever did". [0]

    This licensing decision sets Linux apart from BSD, and is probably the reason Linux has taken over the world. It's not that Linux is technically superior to FreeBSD or OpenSolaris.

    > Expecting all Linux drivers to be GPL-licensed is unrealistic and just leads to crappy user experiences

    'Unrealistic'? Again, Linux took over the world!

    As for nVidia's proprietary graphics drivers, they're an unusual case. To quote Linus: "I personally believe that some modules may be considered to not be derived works simply because they weren't designed for Linux and don't depend on any special Linux behaviour." [1]

    > Why are all drivers expected to use the GPL?

    Because of the 'derived works' concept.

    The GPL wasn't intended to overreach to the point that a GPL web server would require that only GPL-compatible web browsers could connect to it, but it was intended to block the creation of a non-free fork of a GPL codebase. There are edge-cases, as there are with everything, such as the nVidia driver situation I mentioned above.

    [0] https://en.wikipedia.org/w/index.php?title=History_of_Linux&...

    [1] https://en.wikipedia.org/w/index.php?title=Linux_kernel&oldi...

  • > If there is no "approved" method for creating Linux drivers under licenses other than the GPL, that seems like a major problem that Linux should be working to address.

    The problem is already addressed: if someone wants to contribute code to the project, then its licensing must be compatible with the prior work contributed to the project. That's it.

    • But why are all drivers expected to be "part of the project"? We don't treat userspace Linux software that way. We don't consider Windows drivers part of Windows.

      18 replies →

  • > If there is no "approved" method for creating Linux drivers under licenses other than the GPL, that seems like a major problem that Linux should be working to address.

    It's less a thing Linux can work on than a thing lawmakers/courts would have to make binding decisions on; that would make it clear whether this usage is OK or not. But in practice it can only be decided on a case-by-case basis.

    The only way Linux could work on this is by:

    1. Adding an exception to their GPL license to exclude kernel modules from GPL constraints (which obviously won't happen, for a bunch of reasons).

    2. Turning Linux into a microkernel with userland drivers, and driver interfaces that are not license-encumbered (which again won't happen, because that would be a completely different system).

    3. Oracle re-licensing ZFS under a permissive open-source license (e.g. dual-licensing it; it doesn't need to be GPL, just GPL-compatible, e.g. Apache v2). Guess what, that won't happen either, or at least I would be very surprised. Oracle is running out of products people _want_ to buy from them, and they increasingly (ab)use the license/copyright/patent system to earn their money and force people to buy their products (or at least somehow pay license fees to them).

  • >[...] that seems like a major problem that Linux should be working to address [...] Why are all drivers expected to use the GPL?

    Vendors are expected to merge their drivers in mainline because that is the path to getting a well-supported and well-tested driver. Drivers that get merged are expected to use a GPL2-compatible license because that is the license of the Linux kernel. If you're wondering why the kernel community does not care about supporting an API for use in closed-source drivers, it's because it's fundamentally incompatible with the way kernel development actually works, and the resulting experience is even more crappy anyway. Variations of this question get asked so often that there are multiple pages of documentation about it [0] [1].

    The tl;dr is that closed-source drivers get pinned to the kernel version they're built for and lag behind. When the vendor decides to stop supporting the hardware, the drivers stop being built for new kernel versions and you can basically never upgrade your kernel after that. In practice it means you are forced to use that vendor's distro if you want things to work properly.

    >[...] nVidia is never going to release full-featured GPL'd drivers.

    All that says to me is that if you want your hardware to be future-proof, never buy nvidia. All the other Linux vendors have figured out that it's nonsensical to sell someone a piece of hardware that can't be operated without secret bits of code. If you ever wondered why Linus was flipping nvidia the bird in that video that was going around a few years ago... well now you know.

    [0]: https://www.kernel.org/doc/html/latest/process/kernel-driver...

    [1]: https://www.kernel.org/doc/html/latest/process/stable-api-no...

  • > Linux is able to run proprietary userspace software. Even most open source zealots agree that this is necessary. Why are all drivers expected to use the GPL?

    To answer your excellent question (and ignore the somewhat unfortunate slam on people who seem to differ with your way of thinking), it is an intentional goal of software freedom. The idea of a free software license is to allow people to obtain a license to the software if they agree not to distribute changes to that software in such a way that downstream users have fewer options than they would with the original software.

    Some people are at odds with the options available with licenses like the GPL. Some think they are too restrictive. Some think they are too permissive. Some think they are just right. With respect to your question, it's neither here nor there whether the GPL is hitting a sweet spot or not. What's important is that the original author has decided that it did and has chosen the license. I don't imagine that you intend to argue that a person should not be able to choose the license that is best for them, so I'll just leave it at that.

    The root of the question is "What determines a change to the software". Is it if we modify the original code? What if we add code? What if we add a completely new file to the code? What if we add a completely new library and simply link it to the code? What if we interact with a module system at runtime and link to the code that way?

    The answers to these questions are not well defined. Some of them have been tested in court, while others have not. There are many opinions on which of these constitutes changing of the original software. These opinions vary wildly, but we won't get a definitive answer until the issues are brought up in court.

    Until that happens, as a third party who wishes to interact with the software, you have a few choices. You can simply take your chances and do whatever you want. You might be sued by someone who has standing to sue. You might win the case even if you are sued. It's a risk. In some cases the risk is higher than others (probably roughly ordered in the way I ordered the questions).

    Another possibility is that you can follow the intent of the original author. You can ask them, "How do you define changing of the software?" You may agree with their ideas or not, but it is a completely valid course of action to choose to follow their intent regardless of your opinion.

    Your question is: why are all drivers expected to use the GPL? The answer is because drivers are considered by the author to be an extension of the software and hence to be covered by the same license. You are absolutely free to disagree, but it will not change the original author's opinion. You are also able to decide not to abide by the author's opinion. This may open you up to the risk of being sued. Or it may not.

    Now, the question unasked is probably the more interesting question. Why does Linus want the drivers to be considered an extension of the original software? I think the answer is that he sees more advantages in the way people interact in that system than disadvantages. There are certainly disadvantages and things that we currently can't use, but for many people this is not a massive hardship. I think the question you might want to put to him is, what advantages have you realised over the years from maintaining the license boundaries as they are? I don't actually know the answer to this question, but would be very interested to hear Linus's opinion.

    • Sorry for using the term "zealots", I didn't intend it as a pejorative. I should probably have said "hardliners". I meant only to refer to people at the extreme end of the spectrum on this issue.

      > The root of the question is "What determines a change to the software". [...] The answers to these questions are not well defined.

      And that's fair, but what confuses me is that I never see this question raised on non-Linux platforms. No one considers Windows drivers a derivative of Windows, or Mac kernel extensions a derivative of Darwin.

      Should the currently-in-development Windows ZFS port reach maturity and gain widespread adoption (which feels possible!), do you foresee a possibility of Oracle suing? If not, why is Linux different?


This is nonsense. The problem is not getting ZFS bundled with Linux, as he implies here. The problem is that Linux artificially restricts which APIs your module is able to access based on its license, so you wouldn't be able to use ZFS even by your own prerogative, as he suggests.

He is claiming that it comes down to the user's choice, which would be just fine if that were true. The only problem is that Linux has purposely taken steps to hinder that choice.
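The restriction being described is the kernel's `EXPORT_SYMBOL` / `EXPORT_SYMBOL_GPL` split: a module declares its license with `MODULE_LICENSE()`, and symbols the kernel exports via `EXPORT_SYMBOL_GPL()` will only resolve for modules whose declared license is GPL-compatible. The fragment below is a minimal sketch of how this looks from a module author's side; it assumes the standard kernel build system and headers, and is not a complete, buildable module on its own.

```c
#include <linux/module.h>
#include <linux/init.h>

/* A module declares its license here. Anything other than a
 * GPL-compatible string ("GPL", "GPL v2", "Dual BSD/GPL", ...)
 * marks the module as proprietary and taints the kernel.
 * The OpenZFS module declares "CDDL", which is treated as
 * non-GPL for this purpose. */
MODULE_LICENSE("CDDL");

static int __init example_init(void)
{
    /* Kernel symbols exported with EXPORT_SYMBOL() can be used by
     * any module. Symbols exported with EXPORT_SYMBOL_GPL() only
     * resolve if MODULE_LICENSE() above is GPL-compatible; for a
     * non-GPL module, calling such a symbol makes loading fail at
     * insmod time with an unresolved-symbol error. */
    return 0;
}

static void __exit example_exit(void)
{
}

module_init(example_init);
module_exit(example_exit);
```

This is why the parent comment distinguishes "the code" from "the calls": the restriction is enforced at module link/load time on individual exported symbols, not on whether the out-of-tree code exists or is distributed.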