Comment by jamesy0ung
7 days ago
Is there any reason to use VMS today other than for existing applications that cannot be migrated? I've heard its reliability is legendary, but I've never tried it myself. The 1-year-licensed VM seems excessively annoying. Is it just old and esoteric, or does it still have practical use? At least with Linux, multiple vendors release and support distros and it is mainstream, whereas with VMS, you'd be stuck with VSI.
In modern times, we have taken the "everything breaks all the time, so make redundancy and failover cheap/free" approach.
VMS (and the hardware it runs on) takes the opposite approach: keep everything alive forever, even through hardware failures.
So the VMS machines of the day had dual-redundant everything: memory interconnected across machines, SCSI interconnects, and anything else you could think of was duplicated.
VMS clusters could be configured hot/hot standby, where two identical cabinets full of redundant hardware could fail over mid-instruction and keep going. You can't do that with the modern approach. The documentation filled almost an entire wall of office bookcases; there was a lot of it.
These days, usually nothing inside the box is redundant; instead we duplicate the boxes and make them cheap sheep, a dime a dozen.
Which approach is better? That's a great question. I'm not aware of any academic exercises on the topic.
All that said, most people don't need decade-long uptimes. Even the big clouds don't bother trying for decade-long uptimes, as they regularly have outages.
One of the things that blew my mind in my early career was seeing my mentor open the side of a VMS machine (I can’t remember the hardware model, sorry), slide out a giant board of RAM, slide in another board of the same physical size but with a CPU on it, and then enable the CPU.
The same slot in that machine could take RAM or CPU daughterboards, and they were swappable without a reboot!
Exactly! One would never, ever do that with x86.
3 replies →
I was actually surprised to see that there's been a release in the last 12 months - I had thought it was dead.
I used it extensively in the late '90s and early '00s and really liked it. As a newb sysadmin at the time, the built-in versioning on the fs saved me from more than one self-inflicted fsck-up.
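For the uninitiated: every save creates a new numbered version of the file, so a fat-fingered edit is never fatal. From memory it looked roughly like this (file name is just an example):

    $ DIRECTORY LOGIN.COM;*
    LOGIN.COM;3    LOGIN.COM;2    LOGIN.COM;1
    $ PURGE /KEEP=2 LOGIN.COM    ! drop all but the two newest versions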
I can't imagine there would be any green-field deployments in the last 10 years or so - I'm guessing it's just supporting legacy environments.
> I can't imagine there would be any green-field deployments in the last 10 years or so - I'm guessing it's just supporting legacy environments.
This is not entirely the case.
I have been writing about VMS for years. The first x86-64 edition, version 9, was released in 2020:
https://www.theregister.com/2022/05/10/openvms_92/
Version 9.0 was essentially a test. 9.1 in 2021 was another test and v9.2 in 2022 was production-ready.
There's no new Itanium or Alpha hardware, and version 8.x runs on nothing else. Presumably v9.x is selling well enough to keep the company alive, because it's been shipping new versions for a while now.
Totally new greenfield deployments? Probably few. But new installs of the new version, surely, yes, because VMS 9 doesn't run on any legacy kit, so these must be new deployments.
It's been growing for a few years. Maybe not growing much, but a major new version and multiple point releases mean somebody is buying it and deploying it. Never mind "no new deployments in a decade"; there have been more new deployments in the last few years than in the previous decade.
> I had thought it was dead.
HP tried to kill it. Somewhere in the neighborhood of 10 years ago they announced the EOL. This company, VMS Software Inc (VSI), was formed specifically to buy the rights and maintain/port it. So you have an interesting situation.
Old VAX and Alpha systems are supported, supposedly indefinitely, but if you have an Itanium system it has to be newer than a certain age. HP didn’t sell the rights to support the older Itaniums, and no longer issues licenses for them. So there is a VMS hardware age gap. Really old is ok. Really new is ok.
It's now ported to x86 as well, so you can probably just order a Dell box and install OpenVMS on it.
2 replies →
MCP and MVS (now called z/OS) are both still supported. Not sure whether MCP still receives updates, though.
> Not sure whether MCP still receives updates though.
MCP Release 21 came out in mid-2023, and release 22 is supposed to be out middle of this year, with further releases planned: https://www.unisys.com/siteassets/microsites/clearpath-futur...
Looking at the new features, they seem to be mainly around security (code signing, post-quantum crypto) and improved support for running in cloud environments (with the physical mainframe CPU replaced by a software emulator).
Unisys's other mainframe platform, OS 2200, is still around too, and seems to follow a similar release schedule - https://www.unisys.com/siteassets/microsites/clearpath-futur... - although I get the impression there are more MCP sites remaining than OS 2200 sites?
4 replies →
Right, but z/OS is part of a larger longer-running hardware strategy that, with virtualization, serves the needs of mixed-OS workloads and multi-decade tenures overseeing 24/7 systems.
The corpse of OpenVMS on the other hand is being reanimated and tinkered with, presumably paid for by whatever remaining support contracts exist, and also presumably to keep the core engineers occupied with inevitably fruitless busywork while occasionally performing the contractually required on-call technomancy on the few remaining Alpha systems.
VMS is dead... and buried, deep.
It's a shame it can't be open-sourced, just like NetWare won't be open-sourced, and it probably has less chance of being used for new projects than RISC OS or AmigaOS.
3 replies →
It's fun for hobbyists! The first multi-user system I used happened to be a VAX/VMS system, so it brings me back to my youth. I have a VAX running in simh, complete with compilers. The release, VMS 7.3, is almost 25 years old.
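If anyone wants to try it, the config is tiny. Mine looks roughly like this (the MicroVAX 3900 simulator; disk and NVRAM file names are just my own):

    ; vax.ini - MicroVAX 3900, 64 MB RAM, one RA92 system disk
    set cpu 64m
    set rq0 ra92
    attach rq0 vms073.dsk
    attach nvr nvram.bin
    boot cpu

Then BOOT DUA0 at the >>> console prompt and VMS comes up.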
There is a fair amount of operation-critical stuff, like power plants (especially nuclear), radars, traffic management, and various finance/insurance/airline services, that runs on VMS afaik. Something about very reliable cluster-native operations, or so it would seem. '90s cloud-native.
No. Most of the good stuff was lifted into Windows NT decades ago. The rest has been far surpassed over the same time period by Linux and others. A few cool things probably fell into the cracks, but that's common in the industry.
It's interesting in a "what if/parallel universe" kind of way, but I certainly wouldn't touch it for anything new with that licensing.
Licensing cost is, indeed, a major deterrent for any greenfield project planning to run VMS. The same applies to any proprietary operating system, however, be it AIX, IBM i, z/OS, MCP, HP-UX, or Windows. I don't think there is much new work going on on any of these platforms.
I used it a bit at University - most notably it had an Occam system on it that wasn't available on the Sun workstations.
I'm curious about running a VMS system, although the admin side looks a bit daunting. The thing I'd really like to do is run X Windows on an emulator in my home lab, just to see it run.
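The X part is the easy bit, as I understand it: DECwindows apps are ordinary X clients, so you point the VMS side at whatever X server you have. Roughly (address and app are examples; assumes TCP/IP services installed on VMS and an X server accepting connections on your end):

    $ SET DISPLAY /CREATE /NODE=192.168.1.10 /TRANSPORT=TCPIP
    $ RUN SYS$SYSTEM:DECW$CLOCK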
I had an Alpha AXP 150 workstation on my desk for a while, it ran X11 applications fine with no source changes required.
> Is there any reason to use VMS today other than for existing applications that cannot be migrated?
No, there is no reason to do a greenfield VMS deployment and hasn't been for a long time.
> I've heard its reliability is legendary, but I've never tried it myself.
I've heard the same things but I am doubtful as to their veracity in a modern context. Those claims sound like they come from an era when VMS was still a cutting-edge and competitive product. I'm sure VMS on VAXclusters had impressive reliability in the 1980s, but I doubt it's anything special today. If you look at the companies and institutions that need performance and high reliability today (e.g. hyperscaler companies or the TOP500), they are all using the same thing: Linux on clusters of x86-64 machines.
Hyperscaler companies or the TOP500 don't need high reliability, especially the latter.
With cloud computing, reliability is achieved through software: distributed software, which needs to scale horizontally.
On a mainframe, reliability is achieved through hardware (at least as far as user software is concerned), and the software is vertical.
If you need to run vertical, single-system-image software, the cloud is useless for making it reliable.
Systems built on the cloud are reliable only insofar as people can write reliable distributed systems which assume components will fail. It is not reliable if you can't, or don't want to write software like that (which carries a significant engineering cost).
The real reason to avoid mainframes (and VMS) is vendor lock-in, not technological.
VMS systems give you literal decades of uptime. Google, Amazon, and other large providers have outages all the time; they are usually short and localized.
They are completely different mental models and ways of thinking about the problem of reliability and uptime.
VMS (and IBM System/360 and other large compute systems) will almost certainly give you stronger uptime guarantees than any modern compute stack, but almost nobody needs uptimes measured in literal decades.
Hyperscaler/TOP500 computing is not optimized for reliability in the same way OpenVMS is.
I think you're half right.
On one hand, I don't see many modern services having years to decades of uptime. Clustering, where it exists, is bolted on, and it's not available at all for most products. These were normal for OpenVMS deployments. Seems like a safer bet in that regard.
If people have $$$, which VMS requires for such goals, they can hire the type of sysadmins and programmers who can do the same on *nix systems. The number of components matching VMS's prior advantages increases annually. Also, these are often open source, with the corresponding advantages for maintenance and extensions.
The other thing I notice is VMS systems appear to be used in constrained ways compared to how cloud companies use Linux. It might be more reliable because users stay on the happy path. Linux apps keep taking risks to innovate. FreeBSD is a nice compromise for people wanting more stability or reliability with commodity hardware.
Then, you have operating systems whose designs far exceed VMS in architectural reliability. INTEGRITY RTOS, QNX, and LynxOS-178B come to mind. People willing to do custom, proprietary systems are safer building on those.
Hey, something like twenty are not x86-64 based :) With the Arm-based Fugaku at the top a couple of years ago.
That is the kind of comment that a well-run bulletin board would moderate. Then again, there are probably not enough VMS systems people left to really have an r-war (a sort of architecture war).
VMS's key feature over Unix is consistency, and beyond that, being interrupt-driven at all levels (no cycles wasted polling, except for code ported over using POSIX interfaces). VMS was killed by a confluence of business issues, not because it was obsolete or inefficient.
I had a job at a place in college, back in 1997–2000, that was run by a big DEC Alpha server running VMS. VMS was dying then.
I was just a lowly kid programmer working on a side project, so I can't tell you whether it's still uniquely good at something to justify its usage today. It worked. But it was weird and arcane (not that Unix isn't, but Unix won) and using it today for a new project would come with a lot of friction.
VAX/VMSCluster was like the Kubernetes of the 1980s. Lots of features that appeared in k8s decades later were baked into VMS.
https://en.wikipedia.org/wiki/VMScluster
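Even the day-to-day tooling has that flavor: cluster membership and quorum are first-class DCL concepts, which feels a lot like watching etcd members. A rough sketch, since exact commands vary a little by version:

    $ SHOW CLUSTER                      ! live view of member nodes and votes
    $ SET CLUSTER /EXPECTED_VOTES=3     ! quorum tuning after adding a node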