Discontinuation of ARM Notebook with Snapdragon X Elite SoC

7 days ago (tuxedocomputers.com)

It's a shame that this didn't end up going anywhere. When Qualcomm was doing their press stuff prior to the Snapdragon X launch, they said that they'd be putting equal effort into supporting both Windows and Linux. If anyone here is running Linux on a Snapdragon X laptop, I'd be curious to know what the experience is like today.

I will say that Intel has kind of made the original X Elite chips irrelevant with their Lunar Lake chips. They have similar performance and battery life, and run cool (so you can use the laptop on your lap or in bed without it overheating), but unlike the Snapdragon machines they have full Linux support today and you don't have to deal with x86 emulation. If anyone needs a thin & light Linux laptop today, they're probably your best option. Personally, I get 10-14 hours of real usage (not manufacturer "offline video playback with the brightness turned all the way down" numbers) on my Vivobook S14 running Fedora KDE. In the future, it'll be interesting to see how Intel's upcoming Panther Lake chips compare to the Snapdragon X2.

  • The iGPU in Panther Lake has me pretty excited about Intel for the first time in a long time. Lunar Lake proved they’re still relevant; Panther Lake will show whether they can actually compete.

    • Lunar Lake had integrated RAM, right? Given certain market realities right now, it could be a real boon for them if they keep that design.

  • I'm typing this from a Snapdragon X Elite HP. It's fine, really, but my use is fairly basic: I only use it to watch movies, read, browse, draft Word and Excel documents, and do some light coding.

    No gaming - and I came in knowing full well that a lot of mainstream programs don't play well with Snapdragon.

    What has amazed me the most is the battery life and the apparent absence of the lag or micro-stuttering you get on some other laptops.

    So, in all, fine for light use. For anything serious, use a desktop.

    • What is it about it that makes it unsuited for anything serious? The way you describe it, the only thing it's not suited for is gaming, which is not generally regarded as serious.

      Many people, including myself, do serious work on a MacBook, which is also ARM. What's different about this Qualcomm laptop that makes it inappropriate?

  • I was incredibly excited when they announced the chip alongside all kinds of promises regarding Linux support, so I pre-ordered a laptop with the intention of installing Linux later on. When reports came out that single core performance could not even match an old iPhone, alongside WSL troubles and disappointing battery life, I sent it back on arrival.

    Instead I paid the premium for a nicely specced MacBook Pro, which is honestly everything I wanted, save for Linux support. At least it's proper Unix, so I don't notice much difference in my terminal.

  • Forget equal effort: Start off with hardware docs.

    • Equal effort is far more likely from Qualcomm than hardware docs. They don't even freely share docs with partners, and many important things are restricted even from their own engineers. I've seen military contractors less paranoid than QCOM.

    • Qualcomm could've become "the Intel of the ARM PC" if they wanted to, but I suspect they see no problem with (and perhaps have a vested interest in) proprietary closed systems given how they've been doing with their smartphone SoCs.

      Unfortunately, even Intel is moving in that direction whenever they're trying to be "legacy free", but I wonder if that's also because they're trying to emulate the success of smartphone SoC vendors.

  • > I will say that Intel has kind of made the original X Elite chips irrelevant with their Lunar Lake chips.

    Depends why the Snapdragon chips were relevant in the first place! I got an ARM laptop for work so that I can locally build things for ARM that we want to be able to deploy to ARM servers.

    • Surprising. Cross compilation too annoying to set up? No CI pipelines for things you're actually deploying?

      (I'm keen on ARM and RISC-V systems, but I can never actually justify them given the spotty Linux situation and the lack of an actual use case)

  • Do the Lunar Lake chips have the same incredible standby battery times as the Snapdragon X's? That's where the latter really shines in my opinion.

    • I have an AMD laptop from a couple of generations back that can 'standby' for months - it's called S4 hibernate. At the same time it's set up for S3 and can sit in S3 for a few days at least, recovering in less time than it takes to open the screen. The idea that you need instant wakeup when the screen has been closed for days is sort of a niche case; even Apple's machines hibernate if you leave the screen closed for too long.

      That isn't to say that modern standby/s2-idle isn't super useful, because it is, but more for actual use cases where the machine can basically go to sleep with the screen on displaying something the user is interacting with.
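
      If you want to check what your own machine exposes, here's a minimal sketch (assuming a typical Linux sysfs layout; paths can differ on unusual firmware):

        # Minimal sketch: report which suspend/hibernate modes this kernel exposes.
        from pathlib import Path

        def read(path):
            p = Path(path)
            return p.read_text().strip() if p.exists() else "(not available)"

        # e.g. "freeze mem disk" -> s2idle/suspend and hibernate are available
        print("Supported states:", read("/sys/power/state"))
        # e.g. "s2idle [deep]" -> the bracketed entry is what 'mem' currently uses
        print("mem maps to:     ", read("/sys/power/mem_sleep"))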

  • Yeah, Lunar Lake already landed a hit on ARM, but Panther Lake should be an even stronger one.

    • The better efficiency of x86 mobile CPUs does negate much of the advantage of ARM laptops. It's just not worth the trouble of going through a major software transition.

      One thing I find suspicious is the large delta in single-thread scores between ARM and x86 right now. Real-world performance doesn't suggest that big a difference: the benchmarks point to roughly a 25% gap, but in actual use the delta seems to be less than 10%. Of course, Apple Silicon still has the efficiency crown very much locked down.

      Since benchmarks have become a marketing target, they have become much less useful.

I fully expected this. I really wanted to get the Snapdragon X Elite IdeaCentre just because I wanted an ARM target to run stuff on, but if I'm being honest the Mac Minis are way better on price/performance and support. Apple Silicon is far faster than any other easily available ARM processor with good Linux support (Ampere, Qualcomm, anything else).

I am so grateful to the Asahi Linux guys who made this whole thing work. What a tour de force! One day, we'll get the M4 Mac Mini on Asahi and that will be far superior to this Snapdragon X Elite anyway.

I remember working on a Qualcomm dev board over a decade ago, and they had just the worst documentation. The hardware wouldn't even respond correctly to what you told it to do. I don't know if that's standard, but without the kind of widespread desire there is to run Linux on Apple Silicon, I never really anticipated Snapdragon support approaching what Asahi has on M1/M2.

  • A tour de force indeed. Asahi Linux only works as well as it does because of the massive effort put in by that team.

    For all the flak Qualcomm takes, they do significantly more than Apple to get hardware support into the kernel. They are already working to mainline the X2 Elite.

    The difference is that Apple only makes a few devices and there is a large community around them. It would be far less work to create a stellar Linux experience on a Lenovo X Elite laptop than on an M2 MacBook. But fewer people are lining up to do it on the Lenovo. We expect Lenovo, Linaro, and Qualcomm to do it for us.

    Fair enough. But we should not be praising Apple.

  • Apple provide even less documentation than Qualcomm. Let that sink in.

    • Wrong documentation is perhaps worse than no documentation. Although Apple provides little, at least it is usually accurate, and what's left you know you must reverse engineer.

  • Unfortunately with the main reverse engineers of the Asahi project having moved on, I very much doubt we will see versions working on more recent M-series chips.

Qualcomm doesn't bother to upstream most of their SoCs. They maintain a fork of a specific Linux kernel version for a while, and when they stop updating it, or a new version of Android requires a newer kernel, updates for all devices based on that SoC end.

They have little experience producing code of high enough quality to be accepted into the Linux kernel. They have even less experience maintaining it for an extended period of time.

While I almost certainly wouldn't have done more than wish for one, it's a shame they're not getting any return for their effort.

Does anyone know why Linux laptop battery life is so bad? Is it a case of devices needing to be turned off that aren't? Poor CPU scheduling?

  • It's ACPI - most laptops ship with half-broken ACPI tables and provide support for tunables through Windows drivers. It's convenient for laptop manufacturers, because Microsoft makes it very easy to update drivers via Windows Update, and small issues with sleep, performance, etc. can mostly be patched through a driver update.

    Linux, OTOH, can only use the information it has from ACPI to accomplish things like CPU power states, etc. So you end up with issues like "the fans stop working after my laptop wakes from sleep" because of a broken ACPI implementation.

    There are a couple of laptops with excellent battery life under Linux though, and if you can find a Lunar Lake laptop with an iGPU and an IPS screen, you can idle at around 3-4 W and easily get 12+ hours of battery.
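
    If you want to sanity-check the idle draw on your own machine, here's a rough sketch (assuming the battery shows up as BAT0 in sysfs, which isn't guaranteed; some firmware exposes current/voltage instead of power):

      # Rough sketch: read instantaneous battery power draw from sysfs.
      from pathlib import Path

      BAT = Path("/sys/class/power_supply/BAT0")

      def microunits(name):
          f = BAT / name
          return int(f.read_text()) if f.exists() else None

      power_uw = microunits("power_now")  # microwatts, if the firmware exposes it
      if power_uw is None:
          ua, uv = microunits("current_now"), microunits("voltage_now")
          power_uw = ua * uv // 1_000_000 if ua and uv else 0  # uA * uV -> uW

      print(f"Battery draw: {power_uw / 1_000_000:.1f} W")  # only meaningful on battery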

  • > Does anyone know why Linux laptop battery life is so bad?

    It's extremely dependent on the hardware and driver quality. On ARM and contemporary x86 that's even more true, because (among other things) laptops suspend individual devices ("suspend-to-idle" or "S0ix" or "Modern Standby"), and any one device failing to suspend properly has a disproportionate impact.

    That said, to a first approximation, this is a case where different people have wildly different experiences, and people who buy high-end well-supported hardware experience a completely different world than people who install Linux on whatever random hardware they have. For instance, Linux on a ThinkPad has excellent battery life, sometimes exceeding Windows.
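
    A rough proxy for the "one device failing to suspend" case is each device's runtime-PM status; here's a sketch (assuming the standard sysfs attributes, and only looking at PCI devices) that lists anything not reaching a low-power state:

      # Sketch: list PCI devices that never reach a runtime low-power state.
      from pathlib import Path

      for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
          status_f = dev / "power" / "runtime_status"
          control_f = dev / "power" / "control"
          if not status_f.exists():
              continue
          status = status_f.read_text().strip()
          control = control_f.read_text().strip() if control_f.exists() else "?"
          if status != "suspended":
              # control == "on" means runtime PM is disabled for this device
              print(f"{dev.name}: status={status}, control={control}")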

  • Newer laptops come with extra power peripherals and sensors. Some of them are in ACPI tables, some are not. Most of them are proprietary ASICs (or custom chips; Nuvoton produces quite a few of those). The Linux kernel and userspace have poor support for those. Kernel PCIe drivers require some tuning. The USB stack is kind of shaky, and power management features are often turned off since they get unstable as hell.

    If you have a dGPU, the Linux implementation of power management and offloading actually consumes more power than Windows due to bad architectural design. Here is a talk from XDC2025 that plans to fix some of the issues: https://indico.freedesktop.org/event/10/contributions/425/

    Desktop usage is a third-class citizen under Linux (servers first, embedded a distant second). Phones have good battery life because SoC and ODM engineers spend months tuning them, and they have first-party proprietary drivers. None of the laptop ODMs do such work to support Linux. Even their Windows tooling is arcane.

    Unless users get drivers for all the minute PMICs and sensors, you'll never get the battery life you can get from a clean Windows install with all the drivers. MS and especially OEMs shoot themselves in the foot by filling the base OS with so much bloat that Linux actually ends up looking better compared to stock OEM installs.

  • In addition to the other comments, it's worth noting macOS started adding developer documentation around energy efficiency, quality-of-service prioritization, etc. (along with support within the OS) around 2015-2016, when the first fanless USB-C MacBook came out: https://developer.apple.com/library/archive/documentation/Pe...

    I think I'm arguing it's both things: the OS itself can optimize for battery life, while also instilling awareness and providing API support so developers can consider it too.

    • On top of this, they started encouraging adoption of multithreading and polished up the APIs to make doing so easier even in the early days of OS X, since they were selling PPC G4/G5 towers with dual and eventually quad CPUs.

      This meant that by the time they started pushing devs to pay attention to QoS and such, good Mac apps had already been thoroughly multithreaded for years, making it relatively easy to toss things onto lower priority queues.

  • My Dell XPS had pretty good battery life on Linux. Probably better than on Windows. But Dell sells the XPS with Linux preinstalled, so I assume it has a lot to do with the drivers. Many notebooks have custom chips inside or some weird BIOS that works together with a Windows program. I'd say laptops are more diverse than desktop PCs built from off-the-shelf hardware.

    • Yeah, my 3-ish year old 13.4" XPS Plus is currently consuming 3.9 W with around 150 open tabs across four Firefox windows, 3 active Electron apps, Libreoffice Writer & Impress, a text editor, and a couple of terminals.

      That's in an extremely vanilla Debian stable install, running in the default "Balanced" power mode, without any power-related tuning or configuration.

      That compares reasonably well with my 14" M3 Macbook Pro, which seems to be drawing around 3.5 W with a similar set of apps open.

      Sure, the XPS is flattered in this comparison because it has a slightly smaller screen, but even accounting for that it would still be... fine? Easily enough to get through a full day of use, which is all I care about.

      There's nothing special about this XPS, and I'd expect the Thinkpad models that have explicit Linux support to be equally fine. The key point is that the vendor has put some amount of care and attention into producing a supportable system.

  • Install powertop, the "tunables" tab has a list of system power saving settings you can toggle through the UI. I've seen them make a pretty big difference, but YMMV of course.
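
    Under the hood most of those tunables are plain sysfs writes; roughly, a sketch of the equivalent (needs root, and the usual "this may upset a flaky device" caveat applies):

      # Sketch of what powertop's "tunables" toggles roughly amount to:
      # enable runtime power management by writing "auto" to power/control.
      from pathlib import Path

      for bus in ("usb", "pci"):
          for dev in Path(f"/sys/bus/{bus}/devices").iterdir():
              ctl = dev / "power" / "control"
              if ctl.exists():
                  try:
                      ctl.write_text("auto")  # "on" keeps the device fully powered
                  except OSError:
                      pass  # some devices reject the write; skip them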

    • It mostly just breaks things unfortunately. You can faff around for ages trying to figure out which devices work and which don’t but you end up with not much to show for it.

  • I ran into this problem on a Slimbook some years ago now. I found that my battery drained way too fast in standby, and I remember determining that this was a (relatively common) problem with sleep states: some Linux machines can't really enter or stay in a deeper sleep state, so my Slimbook's standby wasn't much of a standby at all.

    But that's just one problem, I bet.

  • A lot of people say that lightweight desktops/distros help. Probably GNOME/KDE unnecessarily use your SSD, network, GPU and other resources even when you are idle, compared to using a minimal WM and only starting the daemons you actually need.

    I personally never tested it, and I can't find definitive benchmarks that confirm and measure the waste.

  • I've found that it can be made considerably better than Windows on the same hardware, but it requires substantial effort.

  • While each of the comments here describes individual failings, on a well-supported laptop it is possible to get better power efficiency than Windows if you're willing to spend the time manually tuning Linux. The powertop/etc. suggestions are fine, but fundamentally the reason some of the 'lighter' DEs save so much power is that there is a lot of 'slop' in the default KDE/GNOME and application set. You have random things waking up too regularly and polling stuff, which pulls the cores out of deep sleep states. And then there are all the kernel issues around being unable to identify and prioritize/schedule for a desktop. For example, the only thing that should be given free rein is the active foreground application, with background applications grouped and suppressed, running on little cores at slow rates if they have work to do, etc. All of that is a huge part of why macOS does so well vs Linux on the same hardware.

    The comment about ACPI being the problem is slightly off base, since it's a huge part of the solution to good power management on modern hardware. There isn't another specification that allows the kind of background, fine-grained power tuning of random buses/devices/etc. by tiny management cores whose entire purpose is monitoring activity and making the adjustments modern machines require. If one goes the DT route as QC has done here, each machine needs a huge pile of custom mailbox interface drivers upstreamed into the kernel, customized for every device and hardware update/change. They get away with this in the Android space because each device is literally a customized OS and they don't have the upstream turnaround problem, because they don't upstream any of it - but that won't scale for general-purpose compute, as the parent article discusses.
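
    If you want to see how much those wakeups actually cost, here's a sketch (assuming the standard cpuidle sysfs interface) that prints how long CPU0 spends in each idle state:

      # Sketch: per-C-state residency for CPU0 via the cpuidle sysfs interface.
      # Little time in the deepest states (or huge entry counts in shallow ones)
      # usually means something is waking the core up too often.
      from pathlib import Path

      base = Path("/sys/devices/system/cpu/cpu0/cpuidle")
      for state in sorted(base.glob("state*")):
          name = (state / "name").read_text().strip()
          usage = int((state / "usage").read_text())        # number of entries
          time_s = int((state / "time").read_text()) / 1e6  # microseconds -> seconds
          print(f"{name:>10}: entered {usage} times, {time_s:.1f} s total")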

Somewhat of a tangent: the x86-based laptops from this brand (it's new to me, I had never come across Tuxedo Computers before) look attractive, but there is no information about their screens' main property: are they glossy or matte?

My wife is very sensitive to glossy screens, and we have big problems finding a new laptop for her, as most good ones are glossy now.

> We will continue to monitor developments and evaluate the X2E at the appropriate time for its Linux suitability. If it meets expectations and we can reuse a significant portion of our work on the X1E, we may resume development. How much of our groundwork can be transferred to the X2E can only be assessed after a detailed evaluation of the chip.

Apparently the Windows exclusivity period has ended, so Google will support Android and ChromeOS on Qualcomm X2-based devices in 2026, https://news.ycombinator.com/item?id=45368167

Hardware companies generally start working on a laptop before an SoC is released, not after. They also need to secure manufacturer support, in this case from Qualcomm, to be able to deliver in time.

  • HW companies generally have access to the prototype silicon. It's how they iron out bugs in the bringup HW.

I wonder if MediaTek will try its hand at a laptop-oriented SoC now that their flagship mobile SoCs are competitive again and Google is merging Android and ChromeOS.

Generally, they are far nicer than Qualcomm when it comes to supporting standard technology.

  • They already have, and they are in Chromebooks. Last week, another HN:er posted that he uses a Lenovo Chromebook with a Mediatek SoC as his daily Linux dev machine.

    https://news.ycombinator.com/item?id=45938410

    BTW, I don't think Qualcomm SoCs running only Windows was just about performance; it was more of a time-limited exclusivity deal with MS.

I wonder what made it so hard? I thought that Qualcomm was already providing the Linux drivers? Does anyone know? Maybe those were not open source?

My guess is that it's the same situation as Linux phones: large driver blobs supplied by the board producer, but not open. But then... maybe we should invest time in microkernels? Maybe Linux is a dead end because of its monolithic architecture? Because I doubt the big companies will change...

Perhaps they should pursue building around Mediatek CPUs.

Google has already built Chromebooks (which are Linux based) on them, so presumably the necessary drivers exist.

Outside of laptops, NVIDIA sells its Jetson devkits and DGX workstations, which run Linux, are pretty fast, and are ARM-based.

And System76 also sells a high-powered (and $$$) Linux workstation based on an NVIDIA ARM chipset.

So at least for some ARM SOCs, performance issues have largely been solved.

How hard can it be to have an Android laptop? Basically most people just use a browser and the choice of applications is already extensive.

    • What Android plus phones proves is that you can get excellent performance and fantastic battery life from Linux on third-party HW. This could and should be applied to Linux running on an ARM64 system, but I'm not sure why it hasn't been. Maybe it's economies of scale w.r.t. the investment on the phone driver side.

    • Except it isn't the same.

      First of all, the userspace is completely different; secondly, Android has over the years been aggressively changing the way background processes work (in the context of Android activities, not bare-bones UNIX), so it isn't the same as GNU/Linux, where anything goes.

  • That is what all those Android tablets with detachable keyboards already are; plenty of models to choose from.

  • There used to be some laptops like the Toshiba AC100 - actually an almost unusable device, even for simple tasks.

This feels like BAU for PC vendors - you test out a product on a new combination of hardware, and it isn't mature/stable/ready for production, so you kick it down the road to develop later - this is especially true for Linux, where a LOT of the work would be done outside of your organisation.

I mean I feel like once one of the ARM chipmakers can lend a hand on the software side it should be a landslide.

Google and Samsung managed to make very successful Chromebooks together, but IIRC there was a bunch of back and forth to make the whole thing boot quickly and sip battery power.

  • What’s the primary need for ARM? Is it because Apple Silicon showed a big breakthrough in performance per watt with a reduced instruction set? While it’s amazing on paper, I barely notice a performance difference in my day-to-day use between an Intel Ultra and an M2. Battery life is where they are miles apart.

    • I’m guessing for most people it doesn’t much matter. Most people aren’t writing assembly. They do love an all day battery. I think the competition really helps keep these companies honest.

BIOS is an issue for most laptops under Linux, not just ARM.

  • LVFS doesn't exist? UEFI?

    • I mean updating it. Often the updates are Windows-only.

      For example, I had this Dell Elitebook where I installed Debian, wiping out Windows. On Windows the system prompted a BIOS update practically every week, but it's been years on Linux with the same BIOS. IIRC the updates were Windows-only, or required jumping through some complex rings of fire. Haven't bothered looking it up in a while...

      I also had to disable some security protection before I could install Debian, though I guess there's a way around that if I research hard enough.

  • Dell used to have a means to update the BIOS via a small FreeDOS image, I believe. Not sure why something similar couldn't be done from U-Boot.

Was to be expected. Qualcomm is very bad at supporting open platforms.

I was disappointed to see that no good Linux-compatible XPS is available anymore, because they are now based on the latest Snapdragon for bullshit Windows "AI" reasons.

We can nerd out about Linux this and S3 sleep that. How much money does the community need to raise, all in, for that notebook to happen? Where's the GoFundMeAngelList platform that's a cult where I can pledge $10,000 to get this laptop of my dreams? Or are we all too busy shitposting?

  • > How much money does the community need to raise, all in, for that notebook to happen? Where's the GoFundMeAngelList platform that's a cult where I can pledge $10,000 to get this laptop of my dreams?

    The hard part isn't the money - it's identifying an addressable market that makes the investment worthwhile and assembling a team that can execute and deliver on it.

    The market can't be a few hundred enthusiasts who want to spend $10k on a laptop. It has to be at least tens of thousands who would spend $1-2k. Even that probably won't get you to break-even when you consider the size (and speciality) of the team you need to do all this.

ARM was always a distraction, and a monopoly i.e. worse than x86's duopoly.

Only RISC-V is worth switching to.

  • Besides the sibling comment's point, RISC-V isn't free from proprietary extensions either, as each OEM can add their own special juice.

  • monopoly? this is from DeepSeek, ymmv

    Here is a list of major ARM licensees, categorized by the type of license they typically hold.

    1. Architectural Licensees (Most Flexible)

    These companies hold an Architectural License, which allows them to design their own CPU cores (and often GPUs/NPUs) that are compatible with the ARM instruction set. This is the highest level of partnership and requires significant engineering resources.

        Apple: The most famous example. They design the "A-series" and "M-series" chips (e.g., A17 Pro, M4) for iPhones, iPads, and Macs. Their cores are often industry-leading in single-core performance.
    
        Qualcomm: Historically used ARM's core designs but has increasingly moved to its own custom "Kryo" CPU cores (which are still ARM-compatible) for its Snapdragon processors. Their recent "Oryon" cores (in the Snapdragon X Elite) are a fully custom design for PCs.
    
        NVIDIA: Designs its own "Denver" and "Grace" CPU cores for its superchips focused on AI and data centers. They also hold a license for the full ARM architecture for their future roadmap.
    
        Samsung: Uses a mixed strategy. For its Exynos processors, some generations use semi-custom "M" series cores alongside ARM's stock cores.
    
        Amazon (Annapurna Labs): Designs the "Graviton" series of processors for its AWS cloud services, offering high performance and cost efficiency for cloud workloads.
    
        Google: Has developed its own custom ARM-based CPU cores, expected to power future Pixel devices and Google data centers.
    
        Microsoft: Reported to be designing its own ARM-based server and consumer chips, following the trend of major cloud providers.
    

    2. "Cores & IP" Licensees (The Common Path)

    These companies license pre-designed CPU cores, GPU designs, and other system IP from ARM. They then integrate these components into their own System-on-a-Chip (SoC) designs. This is the most common licensing model.

        MediaTek: A massive player in smartphones (especially mid-range and entry-level), smart TVs, and other consumer devices.
    
        Broadcom: Uses ARM cores in its networking chips, set-top box SoCs, and data center solutions.
    
        Texas Instruments (TI): Uses ARM cores extensively in its popular Sitara line of microprocessors for industrial and embedded applications.
    
        NXP Semiconductors: A leader in automotive, industrial, and IoT microcontrollers and processors, almost exclusively using ARM cores.
    
        STMicroelectronics (STM): A major force in microcontrollers (STM32 family) and automotive, heavily reliant on ARM Cortex-M and Cortex-A cores.
    
        Renesas: A key supplier in the automotive and industrial sectors, using ARM cores in its R-Car and RA microcontroller families.
    
        AMD: Uses ARM cores in some of its adaptive SoCs (Xilinx) and for security processors (e.g., the Platform Security Processor or PSP in Ryzen CPUs).
    
        Intel: While primarily an x86 company, its foundry business (IFS) is an ARM licensee to enable chip manufacturing for others, and it has used ARM cores in some products like the now-discontinued Intel XScale.

    • > monopoly? Here is a list of major ARM licensees...

      None of these companies is able to license cores to third parties.

      Only ARM can do that. ARM holds a monopoly.

      > this is from DeepSeek, ymmv

      DeepSeek would have told you this much, given the right prompt. Confirmation bias is unfortunately one hell of a bias.
