Windows Server 2025 Runs Better on ARM

3 days ago (jasoneckert.github.io)

Does Windows on ARM use VBS (Virtualization-Based Security), and does ARM support nested virtualization so it can do so inside a VM, too? Does it employ costly CPU-vulnerability mitigations that might hit twice in a VM (unless the hypervisor is set up properly, which I'd hope is the default for Hyper-V)? Those two things account for most of the common performance problems observed when running modern Windows in a VM. I'd love to know more, but the article doesn't seem to mention either.

>Across multiple runs of each test, the Snapdragon system produced consistent, repeatable timings nearly every time. On the Intel system, results varied significantly, occasionally beating the Snapdragon, but most of the time falling behind. The Snapdragon was the clear winner on each test overall.

They blogged everything needed to reproduce the setup, including the hunch and the test code, but the actual results are missing. It's a little suspect. How much faster is ARM?

  • I intentionally left out screenshots of the output for a couple of reasons:

    1) They’d distract from the main point (I wasn’t aiming to write a benchmarking post), and

    2) They can be misleading, since results will vary across ARM hardware and even between Snapdragon X Elite variants.

    Instead, I included the PowerShell snippets so anyone interested can reproduce the results themselves.

    For a rough sense of the outcome: the Snapdragon VM outperformed the Intel VM by ~20–80%, depending on the test (DNS ~20%, IIS ~50%, all others closer to ~80%).

    • You likely tripped over a difference in power management profiles (and capabilities) between Intel and ARM.

      You're testing "variability" and latency, and you even mention that "modern Intel CPUs tend to ramp frequency..." but entirely neglect to mention which specific Windows Power Profile you were using.

      Fundamentally, you're benchmarking a server operating system on laptop- and/or desktop-class hardware, and not the same spec either. That is, you're not controlling for differences in memory bandwidth, SSD performance, and so on.

      Even on server hardware the power profiles matter! A lot more than you think!

      One of my gimmicks in my consulting gig is to change Intel server power settings from "Balanced" to "Maximum Performance" and gloat as the customer makes the Shocked Pikachu face because their $$$ "enterprise grade server" instantly triples in performance for the cost of a button press.
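
      (For anyone who wants to try the software half of that button press themselves: on Windows it amounts to switching the active power scheme. A sketch; the GUID below is the stock "High performance" scheme that ships with Windows, while the "Maximum Performance" profile mentioned above is typically a vendor-specific BIOS/BMC setting.)

      ```powershell
      # List the power schemes available on this machine
      powercfg /list

      # Activate the built-in High performance scheme
      powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
      ```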

      Not to mention that by testing this in VMs, you're benchmarking three layers: The outer OS (and its power management), the hypervisor stack, and the inner guest OS.


Windows developer here. After reading this post, my gut instinct is that this is due to something called 'segment heap'.

A bit of backstory: there are two totally independent implementations behind the Windows heap allocation APIs (i.e., the implementation code behind RtlAllocateHeap and RtlFreeHeap, which are called by malloc/free). The older of the two, developed during the Dave Cutler era, is known as the "NT heap". The newer implementation, developed in the 2010s, is known as "segment heap". This is all documented online if anyone wants to read more.

When development on segment heap was completed, it was known to be superior to the NT heap in many ways. In particular, it was more efficient in terms of memory footprint, due to lower fragmentation-related waste. Segment heap was smarter about reusing small allocation slots that were recently freed. But, as ever, Windows was very serious about legacy app compat. Joel Spolsky calls this the 'Raymond Chen camp'. So they didn't want to turn segment heap on universally. It was known that a small portion of legacy software would misbehave and do things like rely on doing a bit of use-after-free as a treat. Or worse, take dependencies on casting addresses to internal NT heap data structures.

So the decision at the time was to make segment heap the default for packaged executables. At that time, Windows Phone still existed, and Microsoft was pushing super hard on the Universal platform being the new, recommended way to make apps on Windows. They thought we'd see a gradual transition from unpackaged executables to packaged ones, and thus a gradual transition from NT heap to segment heap. But the dream of UWP died, and the Windows framework landscape is more fragmented than ever. Most important software on Windows is still unpackaged, and most of it runs on x64.

Why does this matter? Because segment heap is also enabled by default on ARM64, by the same logic as the packaged-vs-unpackaged decision: ARM64 binaries on Windows are guaranteed not to be ancient, unmaintained legacy code. ARM64 Windows devices have been a big success, and users widely report that they feel more responsive than x64 devices.

A not-insignificant part of why Windows feels better on ARM is that segment heap is enabled by default there.

I'd be interested to see how this test turns out if you force segment heap on x64. You can do it on a per-executable basis by creating a DWORD value named FrontEndHeapDebugOptions under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\<myExeName>.exe and giving it a value of 8.

You can turn it on globally for all processes by creating a DWORD value named "Enabled" under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Segment Heap, and giving it a value of 3. I do this on my dev machine and have encountered zero problems. The memory footprint savings are pretty crazy. About 15% in my testing.
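
For convenience, here are those two opt-ins as reg.exe commands (run from an elevated prompt; "MyApp.exe" is a placeholder for whichever executable you're testing, and new processes pick the settings up at launch):

```powershell
# Per-executable: force segment heap for one binary via Image File Execution Options
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\MyApp.exe" /v FrontEndHeapDebugOptions /t REG_DWORD /d 8

# Global: enable segment heap (plus the extra optimizations) for all processes
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Segment Heap" /v Enabled /t REG_DWORD /d 3
```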

  • > You can turn it on globally for all processes by creating a DWORD value named "Enabled" under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Segment Heap, and giving it a value of 3

    I had previously seen this described as 0 vs. non-zero. Since you have some inside experience :), is there anything special about 3 instead? What about 2? How would I find out what these values mean on my own (if that's even possible)?

    Thanks!

    • It's a combination of bit flags. The lowest bit controls whether segment heap is on or off. The second-lowest bit controls some additional optimizations that go along with it, something about multithreading. A value of 3 (both flags set) gives you behavior identical to specifying <heapType>SegmentHeap</heapType> in your application manifest.

      Using the application manifest approach is the right way to ship software that opts into segment heap. The registry thing is just a convenience for local testing.
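
      For reference, the manifest opt-in is a fragment like this (the Microsoft-documented form; the 2020 WindowsSettings namespace is the one that defines heapType):

      ```xml
      <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
        <application>
          <windowsSettings>
            <heapType xmlns="http://schemas.microsoft.com/SMI/2020/WindowsSettings">SegmentHeap</heapType>
          </windowsSettings>
        </application>
      </assembly>
      ```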

> Like many ARM systems, it doesn’t chase high boost clocks and instead delivers steady, sustained performance

Maybe not boost clocks, but every ARM system I've used supports some form of frequency scaling and behaves the same as any x86 machine I've used in comparison. The only difference is how high you can go... /shrug

  • I am not sure how workload-specific it is, but in cloud compute at organizations I've worked for, there have often been substantial cost savings from literally just switching workloads from x86 machines to ARM machines, with no other changes. The savings are usually twofold: a lower price for the instance, plus better efficiency. At one organization in recent memory, we were doing dynamic autoscaling of hundreds of Kubernetes nodes simultaneously and were able to achieve about 15%, conservatively, just from going x86 -> ARM with no additional changes. A workload that is CPU-bound but doesn't depend on x86 architecturally would probably benefit by significantly more than that 15%.

What is the RAM and storage on each of these machines? Is it possible the Snapdragon has packaged RAM (with faster interconnects as a result), and the x86 machine is using DIMMs with longer traces? And what about storage? For that matter, what CPUs are you using?

It's possible ARM is a better architecture. But a lot of benchmarks end up stressing one part of the system more than any other. And if that's the case, faster RAM or faster syscalls or faster SSD performance or something could be what's really driving this performance difference.

  • Both systems have DDR5 soldered to the mainboard and NVMe SSDs (the Intel system has a faster Samsung model compared to the Foresee model in the Snapdragon system).

I mean, we've suspected for some time that smartphone processors have reached parity with laptop-class ones. The MacBook Neo proved it.

It's not clear how AMD and Intel not only lost the smartphone fight but also lost in their own field (i.e., servers, laptops, desktops).

15 years ago if I told you that windows would be running better on ARM you would call me crazy.

  • According to CPU bench, the Neo CPU is about the same speed as a mid-range Intel laptop CPU from four years ago.

    Apple A18 Pro (Q1 2026): Multithread 11977, Single Thread 4043

    Intel Core i5-1235U (Q1 2022): Multithread 12605, Single Thread 3084

    --

    On the high end we get the i9-13900KS at about 60k; the M5 Max 18 scores about the same. But when you move on to server CPUs like Threadripper and EPYC, things are about 3x faster.

    Let's see if the brand-new Arm AGI changes this situation in a few months.

As a former Windows person who still uses a fair amount of PowerShell on Linux, I was interested.

However, reading the summary left me confused; it reads like you don't understand what's happening at Microsoft.

> Hopefully Microsoft will spend more time in the future on their server product strategy and less on Copilot ;-)

The future product strategy is clear: it's Linux for servers. .NET runs on Linux, generally with much better performance. Microsoft internally uses Linux a ton on Azure; Windows Server is legacy, and hell, even MSSQL is legacy. Sure, they'll continue to sell it, because if you want to give them thousands of dollars they'd be idiots to turn it down, but it's no longer a focus.

  • In no way that I can see is MSSQL or Windows Server "legacy".

    • The only people using MSSQL Server are people deep, deep in the Microsoft ecosystem. Think government work, and those unlucky enough to work at a pure Microsoft shop where every problem looks like a Microsoft or Azure solution.

      It's not a dominant database anywhere on the outside.


    • It's "legacy" because it's essentially tied to Windows. Yes, technically it works on Linux, and no doubt that was an amazing feat, but no serious company is running MSSQL on Linux when all the documentation, all the best practices are all based on running that on Windows.


    • Even Microsoft considers Microsoft SQL Server legacy! It's had virtually no new features added between 2022 and 2025 other than AI and cloud integration. All the truly capable people have long since left that team and moved into various Azure and Fabric teams.

      To give you an idea of how bad things have gotten, there's like one guy working on developer tooling for SQL Server and he's "too busy" to implement SDK-style SQL Server Data Projects for Visual Studio. He's distracted by, you guessed it, support for Fabric's dialect of SQL for which the only tooling is Visual Studio Code (not VS 2026).

      There are people screaming at Microsoft that they have VS solutions with hundreds of .NET 10 and SQL projects, and now they can't open them in the flagship IDE product, because the SQL team office at Redmond has cloth draped over the furniture and the lights are all off except over one cubicle.

      Also: There still isn't support for Microsoft Azure v6 or v7 virtual machines in Microsoft SQL Server because they just don't have the staff to keep up with the low-level code changes required to support SSD over NVMe with 8 KB atomicity. Think about how insanely understaffed they must be if they're unable to implement 8 KB cluster support in a database engine that uses 8 KB pages!!!


Reading the article, it seems to boil down to the following two observations:

1. ARM64 is actually less "smart" than x64. While Intel's Core i9 tries to be clever by aggressive boosting and throttling, Snapdragon just delivers steady and consistent performance. This lack of variability makes it easier for the OS to schedule tasks.

2. It is possible that the ARM build is more efficient than the x64 build, because Windows has less historical clutter on ARM than x64.

So, has CPU throttling become so smart that it hurts?

The typical approach on a hypervisor server is to disable C-states, set power management to high performance, etc., preventing x86 from downclocking. Keeping the CPU from seesawing can yield big improvements.
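
Concretely, the software side of that tuning looks something like the powercfg calls below (a sketch; true C-state disabling is usually a firmware/BIOS setting, and the alias names can be verified with `powercfg /aliases`):

```powershell
# Pin the processor frequency floor and ceiling to 100% on AC power
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 100
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 100

# Re-apply the current scheme so the changes take effect
powercfg /setactive SCHEME_CURRENT
```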

But you’re not going to do that in a lab/personal machine, usually.

Can't believe somebody is still using Windows Server. What's the use case?

  • Building Unreal games. Running Windows containers.

    Windows Server is actually kind of awesome for when you need a Windows machine. Linux is great for servers, but Windows Server is the real Windows Pro: rock solid and none of the crap.

    The worst part of Windows Server is knowing that Microsoft can make a good operating system and chooses not to.

    • Yes, I only recently understood why people use Windows Server as a desktop operating system: it looks and feels like old Windows.

    • This has been the case forever. I recall opting to use Windows Server 2003 over XP back in the day for desktop/workstation use.

      Could even enable XP themes, IIRC.

  • Companies that are bigger than startups vibecoding food delivery apps?

    Even Apple and Google run AD internally.

    Gotta support all those CAD workstations running Windows.

    Is Apple hardware still designed on Windows PCs?

    • I'm not really in the space, but all the CAD things I see lately are browser-based "cloud offerings".

      I'm not sure if CAD stuff is just served by a basic graphics card at this point or if there is some server-side work going on.

      The OS doesn't mean that much when every industry decided that Chrome was going to be their VM.


  • Not a business use case, but I run it for my home server. I've got some QNAP JBOD SAS enclosures that only support firmware updates via Windows (or QNAP NAS). Every other disk enclosure I looked at involved some compromises (e.g. rackmounted, or non-SAS, or a custom-built thing that I'm not really interested in.)

    The next best alternative would be a Mac Studio with Thunderbolt enclosures, but that would be notably more expensive, and macOS isn't great as a server OS.

  • AEC companies.

    Our GIS clients run Windows Server as a desktop OS with ESRI's ArcGIS Pro. Incredibly common.

    And once you have that, add in Active Directory, DFS, and random Windows Servers for running archaic proprietary licensing services.

  • An application that is only supported on MS Windows. Yes, those still exist. One project I am working on is supporting such an application that is a mix of desktop and web application talking to industrial monitoring devices.

    It's a beast in terms of complexity, in my opinion. But the vendor only supports running it on specific configurations.

  • Companies that aren't technology companies but use technology that has been doing the job for 20 years.

    • What was the reason 20 years ago?

      (I know, I know. That question might be a bit too loaded. I'm really very sorry. No, there's no need for that; I'll see myself out.)

  • Many companies have legacy software/servers/services that only run on Windows.

    • Yeah, I worked at a company with a Windows application dating from the early 1990s. I suspect it was a case of them needing to move off some ancient hardware and software at a time when Linux was in its infancy and Unix was probably still quite expensive.

  • Questions like this show the incredible disconnect between HN and the widely deployed tech that the world depends on. The use case for Windows Server is running a centrally managed office: from operating your own certificate authority and deploying PC images, to managing resources like virtual desktops, print and file servers, all the way down to individual browser settings and even the ordering of items in the Start menu.

    You can recreate Windows Server on other platforms by stringing together bits and pieces, but there is nothing that comes even close in terms of integration and how everything works together. Nothing.

You upgraded a Windows Server 2022 system to Windows Server 2025, and you're comparing that upgraded machine, which won't have the 2025-optimized defaults (including a lot of the settings that make VMs work much better), to a fresh 2025 installation, right?