
Comment by Annatar

10 years ago

> Citation needed, again. Zones are great. I like Zones a lot. But, Linux has containers; LXC is not virtualization, it is a container, just like Zones. Zones has some smarts for interacting with ZFS filesystems and that's cool and all, but a lot of the same capabilities exist with LVS and LXC.

How about simple logic instead? I know zones work, because they have been in enterprise use since 2006, and they are easy to work with and reason about. If I have the same body of software available on a system with the original lightweight virtualization as I do on Linux, and my goal is data integrity, self-healing, and operational stability, what is my incentive to run LXC, a conceptual knock-off of zones? To me, the choice is obvious: design the solution on top of the tried and tested, original substrate rather than on a knock-off, especially since the acquisition cost of both is zero, and I already know from experience that investing in zones pays profit and dividends down the road, because I have run them in production environments before.

I like profits, and the only thing I like better than engineering profits is engineering profits with dividends. That, and sleeping through my nights without being pulled into emergency conference calls about some idiotic priority 1 incident; an incident which could easily have been avoided altogether had I been running on SmartOS with ZFS and zones. Based on multiple true stories. And don't even get me started on the dismal Red Hat "support", where support cases often end up in a shootout with the customer[1] rather than in fixing the customer's problem, or in honestly admitting that they have no clue what is broken where, nor how to fix it.
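To make the "easy to work with and reason about" claim concrete, here is a minimal sketch of provisioning a zone on SmartOS with `vmadm`; the `image_uuid`, alias, and sizes below are placeholders for illustration, not values from any real deployment:

```shell
# Sketch only: provision a joyent-brand zone on SmartOS.
# The image_uuid is a placeholder; a real one comes from "imgadm avail".
vmadm create <<'EOF'
{
  "brand": "joyent",
  "image_uuid": "00000000-0000-0000-0000-000000000000",
  "alias": "web1",
  "max_physical_memory": 512,
  "quota": 20
}
EOF

# List the zones now on the box
vmadm list
```

Each zone lands on its own ZFS dataset, which is where the snapshot and rollback integration mentioned above comes from.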

> And, it's not Linux' fault the systems you manage are stuck on ext4. There are other filesystems for Linux; XFS+LVM is great.

Did you know that Linux LVM is an incomplete knock-off of HP-UX's LVM, which in turn is a licensed fork of Veritas' VxVM? Again, why would I waste my precious time and run up engineering costs on a knock-off, when I can just run SmartOS and have ZFS built in? The logic does not check out, and the financials even less so.
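As a rough illustration of the difference in moving parts (device names are placeholders), compare a mirrored, compressed ZFS dataset with the closest LVM+XFS equivalent:

```shell
# ZFS on SmartOS/illumos: pool, mirror, and filesystem in two commands
zpool create tank mirror c1t0d0 c1t1d0
zfs create -o compression=lz4 tank/data

# Closest Linux equivalent, spread across three separate layers
pvcreate /dev/sdb /dev/sdc
vgcreate vg0 /dev/sdb /dev/sdc
lvcreate --type raid1 -m 1 -L 100G -n data vg0
mkfs.xfs /dev/vg0/data
mount /dev/vg0/data /mnt/data
```

The ZFS side also gets end-to-end checksums for free, which no combination of the Linux commands above provides.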

On top of that, did you know that not all versions of the Linux kernel provide LVM write barrier support? Did you know that not all versions of the Linux kernel provide XFS write barrier support either (XFS at least reports when barriers are disabled, while LVM does nothing and lies that the I/O made it to stable storage, when it might still be in transit)? Did you know that having both XFS and LVM honor write barriers requires a particular kernel version, which is not available in all versions of RHEL? Did you know that not all versions of LVM implement mirroring correctly, and that in configurations without a separate log device the mirror log is kept in memory, so if the kernel crashes, one experiences data corruption? And did you know that XFS, as awesome as it is, provides no data integrity checksums?
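For anyone who wants to check their own systems: on the older kernels in question, XFS logs it when barriers cannot be honored, so a sketch like this (the device and mount point are placeholders) at least surfaces the problem:

```shell
# Request write barriers explicitly on an older kernel (the barrier
# mount option was later deprecated once barriers became unconditional)
mount -o barrier /dev/vg0/data /mnt/data

# If the stack beneath XFS cannot honor barriers, the kernel logs it;
# LVM itself emits no such warning, which is exactly the complaint above
dmesg | grep -i barrier
```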

And we haven't even touched upon systemd, the knock-off of SMF; nor upon the lack of a fault management architecture; nor upon how insane the bonding of interfaces is on Linux; nor upon how easy it is to create virtual switches, routers, and aggregations (trunks, in Cisco parlance) using Crossbow in Solaris/illumos/SmartOS... when I wrote that there is enough material for a book, I was not trying to be funny.
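For example, the Crossbow pieces mentioned above are each a single `dladm` invocation; the link and NIC names here are illustrative placeholders:

```shell
# Create a virtual switch (etherstub) and hang two VNICs off it
dladm create-etherstub stub0
dladm create-vnic -l stub0 vnic0
dladm create-vnic -l stub0 vnic1

# Aggregate two physical NICs into an LACP trunk
dladm create-aggr -L active -l e1000g0 -l e1000g1 aggr0
```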

[1] http://bugzilla.redhat.com/