Comment by snovymgodym
7 days ago
> Is there any reason to use VMS today other than for existing applications that cannot be migrated?
No, there is no reason to do a greenfield VMS deployment, and there hasn't been for a long time.
> I've heard its reliability is legendary, but I've never tried it myself.
I've heard the same things, but I'm doubtful of their veracity in a modern context. Those claims sound like they come from an era when VMS was still a cutting-edge, competitive product. I'm sure VMS on VAXclusters had impressive reliability in the 1980s, but I doubt it's anything special today. If you look at the companies and institutions that need performance and high reliability today (e.g. hyperscaler companies or the TOP500), they are all using the same thing: Linux on clusters of x86-64 machines.
Neither hyperscaler companies nor the TOP500 need high reliability, especially not the latter.
With cloud computing, reliability is achieved through software: distributed software that has to scale horizontally.
On a mainframe, reliability is achieved through hardware (at least as far as user software is concerned), and the software is vertical.
If you need to run vertical, single-system-image software, the cloud is useless for making it reliable.
Systems built on the cloud are reliable only insofar as people can write reliable distributed systems that assume components will fail. The cloud is not reliable if you can't, or don't want to, write software like that (which carries a significant engineering cost).
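To make that concrete, here is a minimal sketch of what "reliability through software" means in practice: the client, not the hardware, is responsible for surviving a dead component. The replica endpoints and the failover policy here are hypothetical placeholders, not anything from a real deployment.

```python
# Minimal sketch of the cloud reliability model: assume any single
# replica can fail at any time, and route around it in software.

import random
import time
import urllib.request

REPLICAS = [
    "http://replica-a.internal/api",  # hypothetical endpoints; any one
    "http://replica-b.internal/api",  # of them may be down at any moment
    "http://replica-c.internal/api",
]

def fetch_with_failover(path: str, attempts: int = 3, timeout: float = 2.0) -> bytes:
    """Try replicas in random order, retrying with backoff between rounds."""
    last_error = None
    for attempt in range(attempts):
        for base in random.sample(REPLICAS, len(REPLICAS)):
            try:
                with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                    return resp.read()
            except OSError as err:  # connection refused, timeout, DNS failure...
                last_error = err
        if attempt < attempts - 1:
            time.sleep(2 ** attempt)  # exponential backoff before the next round
    raise RuntimeError(f"all replicas failed after {attempts} rounds") from last_error
```

Every line of that (the retries, the backoff, the failover ordering) is engineering cost that a single reliable machine lets you skip, and this is the trivial read-only case before you even get to replicated writes.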
The real reason to avoid mainframes (and VMS) is vendor lock-in, not technology.
VMS systems give you literal decades of uptime. Google, Amazon, and other large providers have outages all the time; they are usually short in duration and localized.
They are completely different mental models and ways of thinking about the problem of reliability and uptime.
VMS (and IBM/360 and other large compute systems) will almost certainly give you stronger uptime guarantees than any modern compute stack, but almost nobody needs uptime measured in literal decades.
Hyperscaler and TOP500 workloads are simply not optimized for reliability in the way OpenVMS is.
I think you're half right.
On one hand, I don't see many modern services with years to decades of uptime. Clustering is also bolted onto many products and simply unavailable for most, whereas both were normal for OpenVMS deployments. VMS seems like a safer bet in that regard.
If people have the $$$, which VMS requires for such goals anyway, they can hire the kind of sysadmins and programmers who can do the same on *nix systems. The number of components matching VMS's old advantages increases annually. These are also often open source, with the corresponding advantages for maintenance and extension.
The other thing I notice is that VMS systems appear to be used in constrained ways compared to how cloud companies use Linux. VMS might be more reliable because its users stay on the happy path, while Linux apps keep taking risks to innovate. FreeBSD is a nice compromise for people who want more stability or reliability on commodity hardware.
Then, you have operating systems whose designs far exceed VMS in architectural reliability. INTEGRITY RTOS, QNX, and LynxOS-178B come to mind. People willing to build custom, proprietary systems are safer building on those.
Hey, something like twenty of the TOP500 machines are not x86-64 based :) The ARM-based Fugaku was at the top a couple of years ago.