Dell admits consumers don't care about AI PCs

2 days ago (pcgamer.com)

I don't know how many others here have a Copilot+ PC, but the NPU on it is basically useless. There isn't any meaningful feature I get from having that NPU. They are far too limited to ever do any meaningful local LLM inference, image processing, or generation. It handles stuff like video chat background blurring, but users' PCs have been doing that for years now without an NPU.

  • I'd love to see a thorough breakdown of what these local NPUs can really do. I've had friends ask me about this (as the resident computer expert) and I really have no idea. Everything I see advertised (blurring, speech to text, etc.) is stuff I never felt my non-NPU machine struggled with. Is there a single remotely killer application for local client NPUs?

    • The problem is essentially memory bandwidth, afaik. Simplifying a lot in my reply, but most NPUs (all?) do not have higher memory bandwidth than the GPU. They were originally designed when ML models were megabytes, not gigabytes. They have a small amount of very fast SRAM (4MB I want to say?). LLM models _do not_ fit into 4MB of SRAM :).

      And LLM inference is heavily memory bandwidth bound (reading input tokens isn't, though - so it _could_ be useful for this in theory, but usually on-device prompts are very short).

      So if you are memory bandwidth bound anyway and the NPU doesn't provide any speedup on that front, it's going to be no faster. And they have loads of other gotchas, so there's no real standard "SDK" or format for them.

      Note the idea isn't bad per se; it has real efficiencies once you do start getting compute bound (e.g. doing multiple parallel batches of inference at once). This is basically what TPUs do (but with far higher memory bandwidth).
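
      Back of the envelope, with purely illustrative numbers (not from any spec sheet): single-stream decode speed is roughly capped by memory bandwidth divided by how many bytes of weights you have to stream per token, no matter which unit does the math.

          # rough sketch of why an NPU on the same DRAM can't beat the GPU at decoding
          def decode_tokens_per_sec(model_size_gb: float, mem_bandwidth_gb_s: float) -> float:
              # each generated token has to read (roughly) every weight once
              return mem_bandwidth_gb_s / model_size_gb

          # e.g. a 4 GB quantised model on ~100 GB/s LPDDR: ~25 tok/s, NPU or not
          print(decode_tokens_per_sec(4.0, 100.0))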

      12 replies →

    • I used to work at Intel until recently. Pat Gelsinger (the prior CEO) had made one of the top goals for 2024 the marketing of the "AI PC".

      Every quarter he would have an all company meeting, and people would get to post questions on a site, and they would pick the top voted questions to answer.

      I posted mine: "We're well into the year, and I still don't know what an AI PC is and why anyone would want it instead of a CPU+GPU combo. What is an AI PC and why should I want it?" I then pointed out that if a tech guy like me, along with all the other Intel employees I spoke to, cannot answer the basic questions, why would anyone out there want one?

      It was one of the top voted questions and got asked. He answered factually, but it still wasn't clear why anyone would want one.

      5 replies →

    • In theory NPUs are a cheap, efficient alternative to the GPU for getting good speeds out of larger neural nets. In practice they're rarely used because for simple tasks like blurring, speech to text, noise cancellation, etc. you can usually do it on the CPU just fine. Power users doing really hefty stuff usually have a GPU anyway, so that gets used because it's typically much faster. That's exactly what happens with my AMD AI Max 395+ board. I thought maybe the GPU and NPU could work in parallel, but memory limitations mean that's often slower than just using the GPU alone. I think I read that their intended use case for the NPU is background tasks when the GPU is already loaded, but that seems like a very niche use case.

      1 reply →

    • > Everything I see advertised for (blurring, speech to text, etc...) are all things that I never felt like my non-NPU machine struggled with.

      I don’t know how good these neural engines are, but transistors are dead-cheap nowadays. That makes adding specialized hardware a valuable option, even if it doesn’t speed up things but ‘only’ decreases latency or power usage.

    • I think a lot of it is just power savings on those features, since the dedicated silicon can be a lot more energy efficient even if it's not much more powerful.

    • "WHAT IS MY PURPOSE?"

      "You multiply matrices of INT8s."

      "OH... MY... GOD"

      NPUs really just accelerate low-precision matmuls. A lot of them are based on systolic arrays, which are like a configurable pipeline through which data is "pumped", rather than a general purpose CPU or GPU with random memory access. So they're a bit like the "synergistic" processors in the Cell, in that they accelerate some operations really quickly, provided you feed them the right way with the CPU, and even then they don't have the oomph that a good GPU will get you.
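
      To make "low-precision matmuls" concrete, here's a minimal numpy sketch of the core operation (illustrative only; a real NPU fuses the rescale and pumps the operands through a fixed pipeline instead of random access):

          import numpy as np

          # int8 inputs, wider accumulator - pretty much the one thing the hardware is built to do fast
          a = np.random.randint(-128, 127, size=(64, 256), dtype=np.int8)
          b = np.random.randint(-128, 127, size=(256, 32), dtype=np.int8)

          acc = a.astype(np.int32) @ b.astype(np.int32)        # accumulate in int32 so products don't overflow
          out = np.clip(acc >> 7, -128, 127).astype(np.int8)   # crude requantisation back to int8
          print(out.shape)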

      8 replies →

  • I have one as well and I simply don’t get it. I lucked into being able to do somewhat acceptable local LLM’ing by virtue of the Intel integrated “GPU” sharing VRAM and RAM, which I’m pretty sure wasn’t meant to be the awesome feature it turned out to be. Sure, it’s dead slow, but I can run mid size models and that’s pretty cool for an office-marketed HP convertible.

    (it’s still amazing to me that I can download a 15GB blob of bytes and then that blob of bytes can be made to answer questions and write prose)

    But the NPU, the thing actually marketed for doing local AI, just sits there doing nothing.

  • I did some research on what you could get if the transistor budget for the NPU were spent on something else in the SoC/CPU.

    You could have 4-10 additional CPU cores, or 30-100MB more L3 cache. I would definitely rather have more cores or cache, than a slightly more efficient background blurring engine.

  • Also the Copilot button/key is useless. It cannot be remapped to anything in Ubuntu because it sends a sequence of multiple keycodes instead of a single keycode for down and then up. You cannot remap it to a useful modifier or anything! What a waste of keyboard real estate.

    • If you want a small adventure, you could see which HID device those keystrokes show up on, and they might be remappable courtesy of showing up on a HID device for that specific button (a quick way to watch for those events from userspace is sketched at the end of this comment). Failing that, they most likely come from either ACPI AML code or from the embedded controller (EC). If the former, it's not that hard to patch the AML code, and maybe Copilot could do it for you (you use standard open source tooling to disassemble the AML blob, which the kernel will happily give you, and then you make a patched version and load it). If the latter, you could see if anyone has made progress toward finding a less silly way to configure the EC.

      (The EC is a little microcontroller programmed by the OEM that does things like handling weird button presses.)

      There are also reports of people having decent results using keyd to remap the synthetic keystrokes from the copilot button.

      (The sheer number of times Microsoft has created totally different specs for how OEMs should implement different weird buttons is absurd.)
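
      For the "see which HID device those keystrokes show up on" step, a rough sketch with python-evdev (the device path is a placeholder, and you need read access to /dev/input):

          import evdev

          # list input devices; the button may be its own device or part of the keyboard
          for path in evdev.list_devices():
              print(path, evdev.InputDevice(path).name)

          # then watch one and press the button to see exactly what it emits
          dev = evdev.InputDevice("/dev/input/event3")  # placeholder, pick one from the list above
          for event in dev.read_loop():
              if event.type == evdev.ecodes.EV_KEY:
                  print(evdev.categorize(event))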

  • If I had to steelman Dell, they probably made a bet a while ago that the software side would have something for the NPU, and if so they wanted to have a device to cash in on it. The turnaround time for new hardware was probably on the order of years (I could be wrong about this).

    It turned out to be an incorrect gamble but maybe it wasn’t a crazy one to make at the time.

    There is also a chicken and egg problem of software being dependent on hardware, and hardware only being useful if there is software to take advantage of its features.

    That said I haven’t used Windows in 10 years so I don’t have a horse in this race.

    • > There is also a chicken and egg problem of software being dependent on hardware, and hardware only being useful if there is software to take advantage of its features.

      In the 90s, as a developer you couldn't depend on a user's computer having a 3D accelerator (or 3D graphics card). So 3D video games shipped multiple renderers (software rendering and hardware-accelerated rendering, sometimes with different backends like Glide, OpenGL, Direct3D).

      Couldn't you simply write some "killer application" for local AI that everybody "wants", but which might be slow (even using a highly optimized CPU or GPU backend) if you don't have an NPU? Since it is a "killer application", very many people will still want to run it, even if the experience is slow.

      Then as a hardware vendor, you can make the big "show-off" how much better the experience is with an NPU (AI PC) - and people will immediately want one.

      Exactly the same story as for 3D accelerators and 3D graphics card where Quake and Quake II were such killer applications.

    • They are still including the NPU though, they just realised that consumers aren't making laptop purchases based on having "AI" or being branded with Copilot.

      The NPU will just become a mundane internal component that isn't marketed.

  • What we want as developers: To be able to implement functionality that utilizes a model for tasks like OCR, visual input and analysis, search or re-ranking etc, without having to implement an LLM API and pay for it. Instead we'd like to offer the functionality to users, possibly at no cost, and use their edge computing capacity to achieve it, by calling local protocols and models.

    What we want as users: To have advanced functionality without having to pay for a model or API and having to auth it with every app we're using. We also want to keep data on our devices.

    What trainers of small models want: A way for users to get their models on their devices, and potentially pay for advanced, specialized and highly performant on-device models, instead of APIs.

    • What seems to be delivered by NPUs at this point: filtering background noise from our microphone and blurring our camera using a watt or two less than before.

      2 replies →

  • The idea is that NPUs are more power efficient for convolutional neural network operations. I don't know whether they actually are more power efficient, but it'd be wrong to dismiss them just because they don't unlock new capabilities or perform well for very large models. For smaller ML applications like blurring backgrounds, object detection, or OCR, they could be beneficial for battery life.

    • Yes, the idea before the whole shove LLMs into everything era was that small, dedicated models for different tasks would be integrated into both the OS and applications.

      If you're using a recent phone with a camera, it's likely using ML models that may or may not be using AI accelerators/NPUs on the device itself. The small models are there, though.

      Same thing with translation, subtitles, etc. All small local models doing specialized tasks well.

      1 reply →

    • Not sure about all NPUs, but TPUs like Google's Coral accelerator are absolutely, massively more efficient per watt than a GPU, at least for things like image processing.

  • NPUs overall need better support from local AI frameworks. They're not "useless" for what they can do (low-precision bulk compute, which is potentially relevant for many of the newer models), and they could help address thermal limits thanks to their higher power efficiency compared to the CPU/iGPU. But that all requires specialized support that hasn't been coming.

  • Yeah, that's because the original NPUs were a rush job; the AMD AI Max is the only one that's worth anything, in my opinion.

  • If you do use video chat background blurring, the NPU is more efficient at it than your CPU or GPU. So the feature it supports is longer battery life, less resource usage on your main chips, and better performance for the things that NPUs can do, e.g. higher video quality on your blurred background.

    • Really, the best we can do with the NPU is a less battery intensive blurred background? R&D money well spent I guess...

  • The stacks for consumer NPUs are absolutely cursed, this does not surprise me.

    They (Dell) promised a lot in their marketing, but we're like several years into the whole Copilot PC thing and you still can barely, if at all, use sane stacks with laptop NPUs.

  • NPUs were pushed by Microsoft, who saw the writing on the wall: AI like chatgpt will dominate the user's experience, edge computing is a huge advantage in that regard, and Apple's hardware can do it. NPUs are basically Microsoft trying to fudge their way to a llamacpp-on-Apple-Silicon experience. Obviously it failed, but they couldn't not try.

    • > NPUs were pushed by Microsoft, who saw the writing on the wall: AI like chatgpt will dominate the user's experience, edge computing is a huge advantage in that regard

      Then where is a demo application from Microsoft of a model that I can run locally where my user experience is so much better (faster?) if my computer has an NPU?

      1 reply →

    • I think the reason why NPUs failed is that Microsoft's preferred standard ONNX and the runtime they developed is a dud. Exporting models to work on ONNX is a pain in the ass.

    • > AI like chatgpt will dominate the user's experience

      I hope not. Sure they’re helpful, but I’d rather they sit idle behind the scenes, and then only get used when a specific need arises rather than something like a Holodeck audio interface

  • The NPU is essentially the Sony Cell "SPE" coprocessor writ large.

    The Cell SPE was extremely fast but had a weird memory architecture and a small amount of local memory, just like the NPU, which makes it more difficult for application programmers to work with.

  • The Copilot Runtime APIs to utilize the NPU are still experimental and mostly unavailable. I can't believe an entire generation of the Snapdragon X chip came and went without working APIs. Truly incredible.

  • If you do use video chat background blurring, the NPU is more efficient at it than your CPU or GPU. So the feature it supports is longer battery life and less resource usage on your main chips.

    • I'm not too familiar with the NPU, but this sounds a lot like GPU acceleration, where a lot of the time you still end up running everything on the CPU since that just works everywhere, rather than having to maintain both a CPU and an NPU version.

  • I've got one anecdote: a friend needed Live Captions for a translating job and had to get a Copilot+ PC just for that.

    • What software are they using for that, and how did they know ahead of time that the software would use their NPU?

  • Question - from the perspective of the actual silicon, are these NPUs just another form of SIMD? If so, that's laughable sleight of hand and the circuits will be relegated to some mothball footnote in the same manner as AVX512, etc.

    To be fair, SIMD made a massive difference for early multimedia PCs for things like music playback, gaming, and composited UIs.

    • > circuits will be relegated to some mothball footnote in the same manner as AVX512

      AVX512 is widely used...

    • NPUs are a separate accelerator block, not in-CPU SIMD. The latter exists for matrix compute, but only in the latest version of AVX which has yet to reach consumer CPUs.

      1 reply →

> It's not that Dell doesn't care about AI or AI PCs anymore, it's just that over the past year or so it's come to realise that the consumer doesn't.

I wish every consumer product leader would figure this out.

  • People will want what LLMs can do; they just don't want "AI". I think having it pervade products in a much more subtle way is the future, though.

    For example, if you close a youtube browser tab with a comment half written it will pop up an `alert("You will lose your comment if you close this window")`. It does this if the comment is a 2 page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort. The end result is I only have to deal with that annoying popup when I really am glad it is there.

    That is a trivial example, but you can imagine how a locally run LLM that was just part of the SDK/API developers could leverage would lead to better UI/UX. For now everyone is making the LLM the product, but once we start building products with an LLM as a background tool it will be great. (A rough sketch of what such a check could look like is at the end of this comment.)

    It is actually a really weird time: my whole career we wanted to obfuscate the implementation and present a clean UI to end users, with them peeking behind the curtain as little as possible. Now everything is like "This is built with AI! This uses AI!".
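
    Purely to illustrate the idea (assuming llama-cpp-python and some small local model; the file name and prompt are placeholders, not any real product API):

        from llama_cpp import Llama

        # hypothetical: a small on-device model gating the "you'll lose your comment" warning
        llm = Llama(model_path="some-small-model.gguf", n_ctx=512, verbose=False)

        def worth_warning_about(draft: str) -> bool:
            prompt = (
                "Answer YES or NO. Would a person be upset to lose this half-written comment?\n"
                f"Comment: {draft!r}\nAnswer:"
            )
            out = llm(prompt, max_tokens=3, temperature=0.0)
            return "YES" in out["choices"][0]["text"].upper()

        print(worth_warning_about("asdfasdf"))                        # hopefully False
        print(worth_warning_about("Here is my two-page argument..."))  # hopefully True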

    • > Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort.

      I read this post yesterday and this specific example kept coming back to me because something about it just didn't sit right. And I finally figured it out: Glancing at the alert box (or the browser-provided "do you want to navigate away from this page" modal) and considering the text that I had entered takes... less than 5 seconds.

      Sure, 5 seconds here and there adds up over the course of a day, but I really feel like this example is grasping at straws.

      16 replies →

    • > if you close a youtube browser tab with a comment half written it will pop up an `alert("You will lose your comment if you close this window")`. It does this if the comment is a 2 page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort.

      I don't think that's a great example, because you can evaluate the length of the content of a text box with a one-line "if" statement. You could even expand it to check for how long you've been writing, and cache the contents of the box with a couple more lines of code.

      An LLM, by contrast, requires a significant amount of disk space and processing power for this task, and it would be unpredictable and difficult to debug, even if we could define a threshold for "important"!

      6 replies →

    • > Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input.

      That doesn't sound ideal at all. And in fact highlights what's wrong with AI product development nowadays.

      AI as a tool is wildly popular. Almost everyone in the world uses ChatGPT or knows someone who does. Here's the thing about tools - you use them in a predictable way and they give you a predictable result. I ask a question, I get an answer. The thing doesn't randomly interject when I'm doing other things and I asked it nothing. I swing a hammer, it drives a nail. The hammer doesn't decide that the thing it's swinging at is vaguely thumb-shaped and self-destruct.

      Too many product managers nowadays want AI to not just be a tool, they want it to be magic. But magic is distracting, and unpredictable, and frequently gets things wrong because it doesn't understand the human's intent. That's why people mostly find AI integrations confusing and aggravating, despite the popularity of AI-as-a-tool.

      25 replies →

    • >For example, if you close a youtube browser tab with a comment half written it will pop up an `alert("You will lose your comment if you close this window")`. It does this if the comment is a 2 page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort. The end result is I only have to deal with that annoying popup when I really am glad it is there.

      The funny thing is that this exact example could also be used by AI skeptics. It's forcing an LLM into a product with questionable utility, causing it to cost more to develop, be more resource intensive to run, and behave in a manner that isn't consistent or reliable. Meanwhile, if there was an incentive to tweak that alert based off likelihood of its usefulness, there could have always just been a check on the length of the text. Suggesting this should be done with an LLM as your specific example is evidence that LLMs are solutions looking for problems.

      6 replies →

    • > Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input

      No, ideally I would be able to predict and understand how my UI behaves, and train muscle memory.

      If closing a tab would mean losing valuable data, the ideal UI would allow me to undo it, not try to guess if I cared.

      1 reply →

    • You know what that reminds me very much of? That email client thing that asks you "did you forget to add an attachment?". That's been there for 3 decades (if not longer) before LLMs were a thing, so I'll pass on it and keep waiting for that truly amazing LLM-enabled capability that we couldn't dream of before. Any minute, now.

    • > readily discard short or nonsensical input

      When "asdfasdf" is actually a package name, and it's in reply to a request for an NPM package, and the question is formulated in a way that makes it hard for LLMs to make that connection, you will get a false positive.

      I imagine this will happen more than not.

    • Using such an expensive technology to prevent someone from making a stupid mistake on a meaningless endeavor seems like a complete waste of time. Users should just be allowed to fail.

      6 replies →

    • So, like, machine learning. Remember when people used to call it AI/ML? Definitely wasn't as much money being spent on it back then.

    • > The end result is I only have to deal with that annoying popup when I really am glad it is there.

      Are you sure about that? It will trigger only for what the LLM declares important, not what you care about.

      Is anyone delivering local LLMs that can actually be trained on your data? Or just pre made models for the lowest common denominator?

    • > For example, if you close a youtube browser tab with a comment half written it will pop up an `alert("You will lose your comment if you close this window")`. It does this if the comment is a 2 page essay or "asdfasdf". Ideally the alert would only happen if the comment seemed important but it would readily discard short or nonsensical input. That is really difficult to do in traditional software but is something an LLM could do with low effort.

      I agree this would be a great use of LLMs! However, it would have to be really low latency, like on the order of milliseconds. I don't think the tech is there yet, although maybe it will be soon-ish.

    • It’s because “AI” isn’t a feature. “AI” without context is meaningless.

      Google isn’t running ads on TV for Google Docs touting that it uses conflict-free replicated data types, or whatever, because (almost entirely) no one cares. Most people care the same amount about “AI” too.

    • I want AI to do useful stuff. Like comb through eBay auctions or Cars.com. Find the exact thing I want. Look at things in photos, descriptions, etc

      I don't think an NPU has that capability.

    • Would that be ideal though? Adding enormous complexity to solve a trivial problem which would work I'm sure 99.999% of the time, but not 100% of the time.

      Ideally, in my view, is that the browser asks you if you are sure regardless of content.

      I use LLMs, but that browser "are you sure" type of integration is adding a massive amount of work to do something that ultimately isn't useful in any real way.

    • You don't need an LLM for that; a simple Markov chain can solve that with a much smaller footprint.

    • At my current work much of our software stack is based on GOFAI techniques. Except no one calls them AI anymore, they call it a "rules engine". Rules engines, like LLMs, used to be sold standalone and promoted as miracle workers in and of themselves. We called them "expert systems" then; this term has largely faded from use.

      This AI summer is really kind of a replay of the last AI summer. In a recent story about expert systems seen here on Hackernews, there was even a description of Gary Kildall from The Computer Chronicles expressing skepticism about AI that parallels modern-day AI skepticism. LLMs and CNNs will, as you describe, settle into certain applications where they'll be profoundly useful, become embedded in other software as techniques rather than an application in and of themselves... and then we won't call them AI. Winter is coming.

      1 reply →

    • Hopefully, you could make a browser extension to detect if a HTML form has unsaved changes; it should not require AI and LLM. (This will work better without the document including JavaScripts, but it is possible to work with JavaScripts too.)

    • No. No-no-no-no-no. I want predictability. I don't want a black box with no tuning handles and no awareness of the context to randomly change the behavior of my environment.

      1 reply →

    • Bingo. Nobody uses ChatGPT because it's AI. They use it because it does their homework, or it helps them write emails, or whatever else. The story can't just be "AI PC." It has to be "hey look, it's ChatGPT but you don't have to pay a subscription fee."

    • I want a functioning search engine. Keep your goofy opinionated mostly wrong LLM out of my way, please.

    • Honestly some of the recommendations to watch next I get on Netflix are pretty good.

      No idea if they are AI; Netflix doesn't tell and I don't ask.

      AI is just a toxic brand at this point IMO.

      1 reply →

  • I think they will eventually. It’s always been a very incoherent sales pitch that your expensive PCs are packed full of expensive hardware that’s supposed to do AI things, but your cheap PCs that have none of that are still capable of doing 100% of the AI tasks that customers actually care about: accessing chatGPT.

    • Also, what kind of AI tasks is the average person doing? The people thinking about this stuff are detached from reality. For most people a computer is a gateway to talking to friends and family, sharing pictures, browsing social media, and looking up recipes and how-to guides. Maybe they do some tracking of things as well in something like Excel or Google Sheets.

      Consumer AI has never really made any sense. It's going to end up in the same category of things as 3D TV's, smart appliances, etc.

      11 replies →

  • Dell is less beholden to shareholder pressure than others; Michael Dell has owned 50% of the company since it went public again.

  • Companies don’t really exist to make products for consumers, they live to create stock value for investors. And the stock market loves AI

    • The stock market has always been about whatever is the fad in the short term, and whatever produces value in the long term. Today AI is the fad, but investors who care about fundamentals have always cared about pleasing customers, because that is where the real value has always come from. (Though be careful - not all customers are worth having; some wannabe customers should not be pleased.)

    • The will of the stock market doesn't influence Dell, they're a privately held corporation. They're no longer listed on any public stock market.

    • As someone pointed out, Dell is 50% owned by Michael Dell. So it's less influenced by this paradigm.

  • I think part of the issue is that it's hard to be "exciting" in a lot of spaces, like desktop computers.

    People have more or less converged on what they want from a desktop computer over the last ~30 years. I'm not saying that there isn't room for improvement, but I am saying that I think we're largely at the state of "boring", and improvements are generally going to be more incremental. The problem is that "slightly better than last year" really isn't a super sexy thing to tell your shareholders. Since the US economy has basically become a giant Ponzi scheme based more on vibes than actual solid business, everything sort of depends on everything being super sexy and revolutionary and disruptive at all times.

    As such, there are going to be many attempts from companies to "revolutionize" the boring thing that they're selling. This isn't inherently "bad", we do need to inject entropy into things or we wouldn't make progress, but a lazy and/or uninspired executive can try and "revolutionize" their product by hopping on the next tech bandwagon.

    We saw this nine years ago with "Long Blockchain Ice Tea" [1], and probably way farther back all the way to antiquity.

    [1] https://en.wikipedia.org/wiki/Long_Blockchain_Corp.

  • There is a place for it, but it is insanely overrated. AI overlords are trying to sell an incremental (if in places pretty big) improvement in tools as a revolution.

I did use whisper last night to get the captions out of a video file. The standard whisper tool from OpenAI uses the CPU. It took more than 20 minutes to fully process a video file that was a little more than an hour long. During that time my 20-core CPU was pegged at 100% utilization and the fan got very loud. I then downloaded an Intel version that used the NPU. CPU usage stayed close to 0% and the fans remained quiet. The total task was completed in about 6 minutes. (The basic CPU-side call is sketched below.)

NPUs can be useful for some cases. The AI PC crap is ill thought out however.
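
For reference, the stock openai-whisper path boils down to roughly this (model size and filename are placeholders):

    import whisper  # pip install openai-whisper; runs on the CPU unless CUDA is available

    model = whisper.load_model("small")     # placeholder model size
    result = model.transcribe("video.mp4")  # placeholder filename

    # each segment carries timestamps, which is what you need for captions
    for seg in result["segments"]:
        print(f'{seg["start"]:.1f} --> {seg["end"]:.1f}  {seg["text"].strip()}')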

  • I suggest trying whisper-cpp if you haven't. It's probably the fastest CPU only version.

    But yeah, NPUs likely will be faster.

    • Depending on the part, it's likely the iGPU will be even faster. The new Panther Lake has iGPUs with either 80% or 250% of the NPU's performance at the higher end; on lower-end models it's lower, but still within the same performance class.

  • If you mean OpenVINO, it uses CPU+GPU+NPU - not just the NPU. On something like a 265K the NPU would only be providing 13 of the 36 total TOPS. Overall, I wish they would just put a few more general compute units in the GPU and have 30 TOPS or something but more overall performance in general.
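
    For anyone poking at this, the OpenVINO Python API does let you pin a model to a single device and see what each unit contributes on its own (the IR filename is a placeholder; which device strings appear depends on your drivers):

        import openvino as ov

        core = ov.Core()
        print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] on a recent Core Ultra

        model = core.read_model("model.xml")         # placeholder IR file
        npu_only = core.compile_model(model, "NPU")  # force the NPU instead of letting AUTO spread the work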

They nailed it. Consumers don't care about AI, they care about functionality they can use, and care less if it uses AI or not. It's on the OS and apps to figure out the AI part. This is why even though people think Apple is far behind in AI, they are doing it at their own pace. The immediate hardware sales for them did not get impacted by lack of flashy AI announcements. They will slowly get there but they have time. The current froth is all about AI infrastructure not consumer devices.

  • The only thing Apple is behind on in the AI race is LLMs.

    They've been vastly ahead of everyone else with things like text OCR, image element recognition / extraction, microphone noise suppression, etc.

    iPhones have had these features 2-5 years before Android did.

    • Apple’s AI powered image editor (like removing something from the background) is near unusable. Samsung’s is near magic, Google’s seems great. So there’s a big gap here.

      3 replies →

    • Dictation (speech to text) is absolutely horrible on iOS. I have nearly driven into a wall when trying to use it whilst driving and it goofs up what I've said terribly. For the love of all things holy, will someone at Apple finally fix speech recognition? It feels like they last touched it in 2016. My phone can run offline LLMs and generate images but it can't understand my words.

      7 replies →

  • > did not get impacted by lack of flashy AI announcements

    To be fair, they did announce flashy AI features. They just didn't deliver them after people bought the products.

    I've been reading about possible class action lawsuits and even the government intervening over false advertising.

  • All of the reporting about Apple being behind on AI is driving me insane and I hope that what Dell is doing is finally going to be the reversal of this pattern.

    The only thing that Apple is really behind on is shoving the word (word?) "AI" in your face at every moment when ML has been silently running in many parts of their platforms well before ChatGPT.

    Sure we can argue about Siri all day long and some of that is warranted but even the more advanced voice assistants are still largely used for the basics.

    I am just hoping that this bubble pops or the marketing turns around before Apple feels "forced" to do a Copilot- or Recall-like disaster.

    LLM tech isn't going away and it shouldn't, it has its valid use cases. But we will be much better when it finally goes back into the background like ML always was.

    • Right! Also I don't think Siri is that important to the overall user experience of the ecosystem. Sure, it's one of the most visible use cases, but how many people really care about that? I usually don't want to talk out loud to do tasks; it's helpful in some specific scenarios but not the primary use case. The text counterpart of understanding user context on the phone is more important, even in the context of LLMs, and that's what plays into the success of their stack going forward.

      5 replies →

  • Even customers who care about AI (or perhaps should...) have other concerns. With the RAM shortage coming up, many customers may choose to do without AI features to save money, even though they'd want them at a lower price.

As someone who spent a year writing an SDK specifically for AI PCs, it always felt like a solution in search of a problem. Like watching dancers in bunny suits sell CPUs, if the consumer doesn't know the pain point you're fixing, they won't buy your product.

  • Tbh it's been the same in Windows PCs since forever. Like MMX in the Pentium 1 days - it was marketed as basically essential for anything "multimedia" but provided somewhere between no and minimal speedup (very little software was compiled for it).

    It's quite similar with Apple's Neural Engine, which afaik is used very little for LLMs, even via Core ML. I don't think I ever saw it being used in asitop. And I'm sure whatever was using it (facial recognition?) could have easily run on the GPU with no real efficiency loss.

    • I have to disagree with you about MMX. It's possible a lot of software didn't target it explicitly but on Windows MMX was very widely used as it was integrated into DirectX, ffmpeg, GDI, the initial MP3 libraries (l3codeca which was used by Winamp and other popular MP3 players) and the popular DIVX video codec.

      3 replies →

    • Using VisionOCR stuff on MacOS spins my M4 ANE up from 0 to 1W according to poweranalyzer

    • Apple's neural engine is used a lot by the non-LLM ML tasks all over the system like facial recognition in photos and the like. The point of it isn't to be some beefy AI co-processor but to be a low-power accelerator for background ML workloads.

      The same workloads could use the GPU but it's more general purpose and thus uses more power for the same task. The same reason macOS uses hardware acceleration for video codecs and even JPEG, the work could be done on the CPU but cost more in terms of power. Using hardware acceleration helps with the 10+ hour lifetime on the battery.

      5 replies →

    • The silicon is sitting idle in the case of most laptop NPUs. In my experience, embedded NPUs are very efficient, so there's theoretically real gains to be made if the cores were actually used.

      2 replies →

  • It's even worse and sadder. Consumers already paid a premium for that, because the monopolists in place made it unavoidable. And now, years later, engineers (who usually are your best advocates and evangelists when it comes to bringing new technologies to the material world) are desperate to find any reason at all for those things to exist and not be a complete waste of money and resources.

  • I spent a few months working on different edge compute NPUs (ARM mostly) with CNN models and it was really painful. A lot of impressive hardware, but I was always running into software fallbacks for models, custom half-baked NN formats, random caveats, and bad quantization.

    In the end it was faster, cheaper, and more reliable to buy a fat server running our models and pay the bandwidth tax.

Fundamentally, when you think about it, what people know today as AI are things like ChatGPT, and all of those products run on cloud infrastructure, mainly via the browser or an app. So it makes perfect sense that customers just get confused when you say "This is an AI PC". Like, what a weird thing to say - my smartphone can do ChatGPT, why would I buy a PC to do that? It's just a totally confusing selling point. So you ask the question why is it an AI PC, and then you have to talk about NPUs, which apart from anything else are confusing (Neural what?) but bring you back to this conversation:

What is an NPU? Oh it's a special bit of hardware to do AI. Oh ok, does it run ChatGPT? Well no, that still happens in the cloud. Ok, so why would I buy this?

Consumers are not idiots. We know all this AI PC crap is mostly a useless gimmick.

One day it will be very cool to run something like ChatGPT, Claude, or Gemini locally on our phones, but we're still very, very far away from that.

  • It’s today’s 3D TVs. It’s something investors got all hyped up about that everybody “has to have“.

    There is useful functionality there. Apple has had it for years, so have others. But at the time they weren’t calling it “AI“ because that wasn’t the cool word.

    I also think most people associate AI with ChatGPT or other conversational things. And I’m not entirely sure I want that on my computer.

    But some of the things Apple and others have done that aren't conversational are very useful. Pervasive OCR on Windows and Mac is fantastic, for example. You could brand that as AI, but you don't really need to; no one cares if you do or not.

    • > Pervasive OCR on Windows and Mac is fantastic, for example.

      I agree. Definitely useful features but still a far cry from LLMs which is what the average consumer identifies as AI.

  • Not that far away, you can run a useful model on flagship phones today, something around GPT 3.5's level.

    So we're probably only a few years out from today's SOTA models on our phones.

I think the moral of the story is just don't buy any electronics until you absolutely have to now: your laptop, your desktop, your car, your phone, your TVs. Go third party for maintenance when you can. Install Linux when you can. Only buy things that can be maintained, and enjoy what you have.

  • I got a new Subaru and the touchscreen is making me insane. I will avoid electronics in cars as much as possible going forward.

    It literally has a warning that displays every time you start the car: "Watching this screen and making selections while driving can lead to serious accidents". Then you have to press agree before you can use the A/C or stereo.

    Like oh attempting to turn the air conditioner on in your car can lead to serious accidents? Maybe you should rethink your dashboard instead of pasting a warning absolving you of its negative effects?

Finally companies understand that consumers do not want AI products, but just better, stronger, and cheaper products.

Unfortunately investors are not ready to hear that yet...

  • If the AI-based product is suitable for purpose (whatever "for purpose" may mean), then it doesn't need to be marketed first and foremost as "AI". This strikes me as pandering more to investors than consumers, and even signaling that you don't value the consumers you sell to, or that you regard the company's stock as more of the product than the actual product.

    I can see a trend of companies continuing to use AI, but instead portraying it to consumers as "advanced search", "nondeterministic analysis", "context-aware completion", etc - the things you'd actually find useful that AI does very well.

    • It's basically being used as a "see, we keep up with the times" label, as there is plenty of propaganda that basically goes "move entirely to using AI for everything or you're obsolete".

  • The problem is that there are virtually no off-the-shelf local AI applications. So they're trying to sell us expensive hardware with no software that takes advantage of it.

    • Yes it's a surprising marketing angle. What are they expecting people to run on these machines? Do they expect your average joe to pop into the terminal and boot up ollama?

      Anyone technical enough to jump into local AI usage can probably see through the hardware fluff, and will just get whatever laptop has the right amount of VRAM.

      They are just hoping to catch the trend chasers out, selling them hardware they won't use, confusing it as a requirement for using ChatGPT in the browser.

      2 replies →

  • I agree with you, and I don't want anything related to the current AI craze in my life, at all.

    But when I come on HN and see people posting about AI IDEs and vibe coding and everything, I'm led to believe that there are developers that like this sort of thing.

    I cannot explain this.

    • I see using AI for coding as a little different. I'm producing something that is designed for a machine to consume and react to. Code is the means by which I express my aims to the machine. With AI there's an extra layer of machine that transforms my written aims into a language any machine can understand. I'm still ambivalent about it, I'm proud of my code. I like to know it inside out. Surrendering all that feels alien to me. But it's also undeniable that AI has sped up a bunch of the boring grunt work I have to do in projects. You can write, say, an OpenAPI spec, some tests and tell the AI to do the rest. It's very, very far from perfect but it remains very useful.

      But the fact remains that I'm producing something for a machine to consume. When I see people using AI to e.g. write e-mails for them that's where I object: that's communication intended for humans. When you fob that off onto a machine something important is lost.

      4 replies →

    • Partly it's these people all trying to make money selling AI tools to each other, and partly there's a lot of people who want to take shortcuts to learning and productivity without thinking or caring about long term consequences, and AI offers that.

    • The "AI" gold rush pays a lot. So they're trying to present themselves as "AI" experts so they can demand those "AI" gold rush salaries.

    • Even as a principal software developer and someone who is skeptical and exhausted with the AI hype, AI IDEs can be useful. The rule I give to my coworkers is: use it where you know what to write but want to save time doing it. Unit tests are great for this. Quick demos and test benches are great. Boilerplate and glue are great for this. There are lots of places where trivial, mind-numbing work can be done quickly and effortlessly with an AI. These are cases where it's actually making life better for the developer, not replacing their expertise.

      I've also had luck with it helping with debugging. It has the knowledge of the entire Internet and it can quickly add tracing and run debugging. It has helped me find some nasty interactions that I had no idea were a thing.

      AI certainly has some advantages in certain use cases; that's why we have been using AI/ML for decades. The latest wave of models brings even more possibilities. But of course, it also brings a lot of potential for abuse and a lot of hype. I, too, am quite sick of it all and can't wait for the bubble to burst so we can get back to building effective tools instead of making wild claims for investors.

      1 reply →

    • > I'm led to believe that there are developers that like this sort of thing.

      this is their aim, along with rabbiting on about "inevitability"

      once you drop out of the SF/tech-oligarch bubble the advocacy drops off

Well, yes, Dell, everyone knows that, but it is _most_ improper to actually _say_ it. What would the basilisk think?!

  • Yes, everybody should buy an AI PC. Buy two! For all we know, that's exactly what we need for AGI... why would you be against that?

  • Why would the basilisk care about people spending money on what is clearly a dead end?

Protip: if you are considering a Dell XPS laptop, consider the Dell Precision laptop workstation instead, which is the business version of the consumer-level XPS.

It also looks like names are being changed, and the business laptops are going with a Dell Pro (essential/premium/plus/max) naming convention.

  • I have the Precision 5690 (the 16-inch model) with an Ultra 7 processor and 4K touchscreen (2025 model). It is very heavy, but it's very powerful. My main gripe is that the battery life is very bad, and it has a 165 watt charger, which won't work on most planes. So if you fly a lot for work, this laptop will die on you unless you bring a lower-wattage charger. It also doesn't sleep properly: I often find it in my bag hours after closing it with the fans going at full blast. It should have a 4th USB port (like the smaller version!). Otherwise I have no complaints (other than about Windows 11!).

    • After using several Precisions at work, I now firmly believe that Dell does not know how to cool their workstations properly. They are all heavy, pretty bad at energy efficiency and run extremely hot (I use my work machine laid belly up in summer since fans are always on). I’d take a ThinkPad or Mac any day over any Dell.

      1 reply →

  • I just want a solid laptop that can be used with the lid closed. I want to set it up and never open the lid again. I'll guess I'll keep dreaming.

    • Yeah they should make a laptop where you can choose what display you want to use, and which keyboard and mouse for that matter. It could be made cheaper by ditching the screen and keyboard, and heck I wouldn’t even mind if it were a bit bigger or heavier since it’ll just sit on or under my desk. That sort of laptop would be amazing.

      1 reply →

Dell is cooked this year for reasons entirely outside their control. DRAM and storage/drive shortages are causing costs of those to go to the moon. And Dell's 'inventory' light supply chain and narrow margins puts them in a perfect storm of trouble.

  • I can't wait for all the data center fire-sales when the whole "AI" boom goes bust. Ebay is going to be flooded with tech.

    • > I can't wait for all the data center fire-sales when the whole "AI" boom goes bust. Ebay is going to be flooded with tech.

      I think a lot of the hardware in these "AI" servers will instead get re-purposed for more "ordinary" cloud applications. So I don't think your scenario will happen.

  • Anything but admitting that the AI king is naked, here on HN...

    • What? No, this is a pretty relevant comment that is being directly caused by AI.

      Consumer PCs and hardware are going to be expensive in 2026 and AI is primarily to blame. You can find examples of CEOs talking about buying up hardware for AI without having a datacenter to run it in. This run on hardware will ultimately drive hardware prices up everywhere.

      The knock on effect is that hardware manufacturers are likely going to spend less money doing R&D for consumer level hardware. Why make a CPU for a laptop when you can spend the same research dollars making a 700 core beast for AI workloads in a datacenter? And you can get a nice premium for that product because every AI company is fighting to get any hardware right now.

      2 replies →

  • So it was RAM a couple months ago and now storage/drives are going to the moon also?

    • It was RAM a couple months ago, and it continues to be RAM. Major RAM manufacturers like SK Hynix are dismantling NAND production to increase RAM manufacturing, which is leading to sharp price increases for solid-state storage.

Why would "consumers" as a whole care about an AI specific pc?

Consumers consciously choosing to play games - or serious CAD/image/video editing - usually note they will want a better GPU.

Consumers consciously choosing to use AI/llm? That's a subscription to the main players.

I personally would like to run local llm. But this is far from a mainstream view and what counts as an AI PC now isn't going to cut it.

> What we've learned over the course of this year, especially from a consumer perspective, is they're not buying based on AI .. In fact I think AI probably confuses them more than it helps them understand a specific outcome.

Do consumers understand that OEM device price increases are due to the AI-induced memory price spike of over 100%?

On the same note, what's going on with Dell's marketing lately?

Dell, Dell Pro, Dell Premium, Dell _Pro_ Premium, Dell Max, Dell _Pro_ Max... They went and added capacitive keys on the XPS? Why would you do this...

A lot of decisions that do not make sense to me.

  • I thought they actually dumbed down the model names. Basically the more adjectives the laptop has, the higher the model is. Now the machines can have pronounceable names, and they just add a generation number every year or so.

    Sure, the original numbering system did make sense, but you had to Google what the system meant. Now it's kind of intuitive, even though it's just a different permutation of the same words?

  • The new XPS's that they just teased at CES bring back the real function keys and have a newly designed aluminum unibody.

    I've shied away from Dell for a bit because I had two XPS 15's that had swelling batteries. But the new machines look pretty sweet!

Something I learned on HN years ago was the principle that something riding to the top of the hype curve is often not a good product, but a good feature in another product.

At CES this year, one of the things that was noted was that "AI" was not being pushed so much as the product, but "things with AI" or "things powered by AI".

This change in messaging seems to be aligning with other macro movements around AI in the public zeitgeist (as AI continues to later phases of the hype curve) that the companies who've gone all-in on AI are struggling to adapt to.

The end state remains to be seen, but it's clear that the present technology around AI has utility - just not enough utility to lift off the hype curve on a continuously upward slope.

Dell is figuring this out, Microsoft is seeing it in their own metrics, and Apple and AWS have more or less dipped their toes in the pool... I'd wager that we'll see some wild things in the next few years as these big bets unravel into more prosaic approaches that are more realistically aligned with the utility AI is actually providing.

I'm not a game programmer, but is there a use case for NPUs in gaming? One idea: if you had some kind of open world game, like a modern role playing game, where the NPCs could have non-deterministic conversations (1990s-style: "talk with the villagers"), that could be pretty cool. Are NPUs a good fit for this use case?

Does anyone know: How do these vendors (like Dell) think normie retail buyers would use their NPUs?

They still ship their laptops with the Copilot key. Once that is removed then their statement will follow their actions.

  • I'd be surprised if Microsoft would sell them Windows licenses or would work with them on drivers if they don't put the Copilot key on the keyboard.

They've just realised that AI won't be in the PC, but on a server - which is where Dell is selling heavily: "AI datacenter" accounted for about 40% of their infrastructure revenue.

Everyone just wants a laptop with the latest NVIDIA graphics card, but also good cooling and a slim design. That's all. People don't care what AI features are built in; that's for Windows and applications.

  • Consumers will prioritize products with the latest hardware, good performance, and low price.

    • In today's economic environment, cost-effectiveness is a primary consideration for consumers.

The NPU is die space that probably would've been better spent on something like a low-power programmable DSP core - which, depending on which one you're looking at, they more or less are, but with some preconceived ideas on how to feed the DSP its data and get the hardware working. From what I've seen, you usually don't get to simply write programs for them.

"We're very focused on delivering upon the AI capabilities of a device—in fact everything that we're announcing has an NPU in it—but what we've learned over the course of this year, especially from a consumer perspective, is they're not buying based on AI," Terwilliger says bluntly. "In fact I think AI probably confuses them more than it helps them understand a specific outcome."

--------------

What we're seeing here is that "AI" lacks appeal as a marketing buzzword. This probably shouldn't be surprising. It's a term that's been in the public consciousness for a very long time thanks to fiction, but more frequently with negative connotations. To most, AI is Skynet, not the thing that helps you write a cover letter.

If a buzzword carries no weight, then drop it. People don't care if a computer has an NPU for AI any more than they care if a microwave has a low-loss waveguide. They just care that it will do the things they want it to do. For typical users, AI is just another algorithm under the hood and out of mind.

What Dell is doing is focusing on what their computers can do for people rather than the latest "under the hood" thing that lets them do it. This is probably going to work out well for them.

  • > People don't care if a computer has an NPU

    I actually do care, on a narrow point. I have no use for an NPU and if I see that a machine includes one, I immediately think that machine is overpriced for my needs.

    • Alas NPUs are in essentially all modern CPUs by Intel and AMD. It’s not a separate bit of silicon, it’s on the same package as the CPU

      2 replies →

Unfortunately, their common sense has been rewarded by the stock tanking 15% in the past month including 4% just today alone. Dell shows why companies don't dare talk poorly of AI, or even talk about AI in a negative way at all. It doesn't matter that it's correct, investors hate this and that's what a ton of companies are mainly focusing on.

  • Should have stayed private. Then they wouldn’t have to care what investors think.

    • The whole point of going private is to make the private equity partners a boatload of money by going public again in the future.

  • To be fair, Dell has bigger, more fundamental threats out on the horizon right now than consumers not wanting AI.

    Making consumers want things is fixable in any number of ways.

    Tariffs?..

    Supply chain issues in a fracturing global order?..

    .. not so much. Only a couple ways to fix those things, and they all involve nontrivial investments.

    Even longer term threats are starting to look more plausible these days.

    Lot of unpredictability out there at the moment.

Most consumers aren't running LLMs locally. Most people's on-device AI is likely whatever Windows 11 is doing, and Windows 11 AI functionality is going over like a lead balloon. The only open-weight models that can come close to major frontier models require hundreds of gigabytes of high bandwidth RAM/VRAM. Still, your average PC buyer isn't interested in running their own local LLM. The AMD AI Max and Apple M chips are good for that audience. Consumer dedicated GPUs just don't have enough VRAM to load most modern open-weight LLMs.

I remember when LLMs were taking off, and open-weight were nipping at the heels of frontier models, people would say there's no moat. The new moat is high bandwidth RAM as we can see from the recent RAM pricing madness.

  • > your average PC buyer isn't interested in running their own local LLM.

    This does not fit my observation. It's rather that running one's own local LLM is currently far too complicated for the average PC user.

I'll never forget walking through a tech store and seeing an HP printer that advertised itself as being "AI-powered". I don't know how you advertise a printer to make it exciting to customers, but this is just ridiculous. I'm glad that tech companies are finally finding out people won't magically buy their product if they call it AI-powered.

NPUs are just kind of weird and difficult to develop for, and integration is usually done poorly.

Some useful applications do exist - particularly grammar checkers, and I think Windows Recall could be useful. But we don't currently have these designed well enough for them to make sense.

  • A while ago I tried to figure out which APIs use the NPU and it was confusing to say the least.

    They have something called the Windows Copilot Runtime but that seems to be a blanket label and from their announcement I couldn't really figure out how the NPU ties into it. It seems like the NPU is used if it's there but isn't necessary for most things.

I already have experience with intermittent wipers; they are impossible to use reliably. A newer car I have made the intermittent wipers fully automatic, and impossible to disable. Now they have figured out how to make intermittent wipers talk, and they want to put them in everything. I foresee a future where humanity has total power and fine control over reality, where finally, after hundreds of years, there is weather control good enough to make it rain exactly the right amount for intermittent wipers to work properly. But we are not there yet.

People don't want an AI PC because they don't want to spend 5000 bucks on something that's half as good as the free version of ChatGPT.

But we've been here before. Computers are going to get faster and cheaper, and LLMs are going to get more optimized, because right now they certainly do a ton of useless calculations.

There's a market, just not right now.

I have a "Copilot" button on my new ThinkPad. I have yet to understand what it does that necessitates a dedicated button.

On Linux it does nothing, on Windows it tells me I need an Office 365 plan to use it.

Like... What the hell... They literally placed a paywalled Windows only physical button on my laptop.

What next, an always-on screen for ads next to the trackpad?

  • It's equivalent to Win + Shift + F23 so you can map it to some useful action if you have a suitable utility at hand.

  • Good news: Office 365 has been renamed to Microsoft 365 Copilot.

    I'm serious. They dropped the Office branding and their office suite is now called Copilot.

    This is good news because it means the Copilot button opens Copilot, which is exactly what you'd expect it to do.

I'm kind of excited about the revival of XPS. The new hardware sounds pretty compelling. I have been longing for a macbook-quality device that I can run Linux on... so eagerly awaiting this.

  • I owned a couple of XPS 13 laptops in a row and liked them a lot, until I got one with a touch bar. I returned it after a couple of weeks and swapped over to the X1 Carbon.

    The return back to physical buttons makes the XPS look pretty appealing again.

    • This is exactly what I was hoping to see. I also returned one I ordered with the feedback that I needed physical function keys and the touchbar just wasn't cutting it for me.

  • Sweet, TIL!

    I love my 2020 XPS.

    The keyboard keys on mine do not rattle, but I have seen newer XPS keyboard keys that do rattle. I hope they fixed that.

I saw the latest XPS laptops and I'm really intrigued… finally a high-end laptop without an Nvidia GPU!

It seems many products (PCs, TVs, cars, kitchen appliances, etc.) have transitioned from "solve for the customer" to "solve for ourselves (product manufacturers) and tell the customer it's for them, even though it's 99% value to us and 1% value to them".

This should have been obvious to anyone paying any attention whatsoever, long before any one of these computers launched as a product. But we can't make decisions on product or marketing based on reality or market fit. No, we have to make decisions on the investor buzzword faith market.

Hence the large percentage of YouTube ads I saw along the lines of "with a Dell AI PC, powered by Intel...", followed by a pile of lies.

The typical consumer doesn't care about any checkbox feature. They just care whether they can play the games they care about and handle Word/email/Netflix.

That being said, Netflix would be an impossible app without graphics acceleration APIs enabled by specific CPU and/or GPU instruction sets. The typical consumer doesn't care about those CPU/GPU instruction sets, or at least doesn't care to know about them. However, they would care if those didn't exist and Netflix took a second per frame to render.

Similar to AI - they don't care about AI until some killer app that they DO care about needs local AI.

There is no such killer app yet, but they're coming. However, as we turn the corner into 2026 it's becoming extremely clear that local AI is never going to be enough for the coming wave of AI requirements. AI workloads are going to require 10-15 simultaneous LLM calls or GenAI requests, and those are things that will never run well locally.

  • Even an i3 CPU is perfectly fine software-decoding 2160p H.264; the only consequence is about 2x higher power draw compared to Nvidia's hardware decoder.

Seems savvy of Dell. With empty AI hype now the default, saying the quiet part out loud is a way to stand out. Unfortunately, it doesn't mean Dell will stop taking MSFT's marketing money to pre-sell the Right-Ctrl key on my keyboard as the "CoPilot" key.

I wouldn't hate this so much if it was just a labeling thing. Unfortunately, MSFT changed how that key works at a low level so it cannot be cleanly remapped back to right-CTRL. This is because, unlike the CTRL, ALT, Shift and Windows keys, the now-CoPilot key no longer behaves like a modifier key. Now when you press the CoPilot key down it generates both key down and key up events - even when you keep it pressed down. You can work around this somewhat with clever key remapping in tools like AutoHotKey but it is literally impossible to fully restore that key back so it will behave like a true modifier key such as right-CTRL in all contexts. There are a limited number of true modifier keys built into a laptop. Stealing one of them to upsell a monetized service is shitty but intentionally preventing anyone from being able to restore it goes beyond shitty to just maliciously evil.

More technical detail: The CoPilot key is really sending Shift+Alt+Win+Ctrl+F23, which Windows now uses as the shortcut to run the CoPilot application. When you remap the CoPilot key to right-Ctrl, only the F23 is being remapped to right-Ctrl. Due to the way Windows works, and because MSFT is now sending F23 DOWN and then F23 UP when the CoPilot key has only been pressed down but not yet released, those other modifiers remain pressed down when our remapped key is sent. I don't know if this was intentional on MSFT's part to break full remapping or if it's a bug. Either way, it's certainly non-standard and completely unnecessary. Waiting until the CoPilot key is actually released before sending the F23 KEY UP event would still work fine for launching the CoPilot app. That's the standard method and would allow full remapping of the key.

But instead, when you press CoPilot after remapping it to Right-Ctrl... the keys actually being sent are: Shift+Alt+Win+Right-Ctrl (there are also some other keypresses in there that are masked). If your use case doesn't care that Shift, Alt and Win are also pressed with Right-Ctrl then it'll seem fine - but it isn't. Your CoPilot key remapped to Right-Ctrl no longer works like it did before or like Left-Ctrl still works (sending no other modifiers). Unfortunately, a lot of shortcuts (including several common Windows desktop shortcuts) involve Ctrl in combination with other modifiers. Those shortcuts still work with Left-Ctrl but not CoPilot remapped to Right-Ctrl. And there's no way to fix it with remapping (whether AutoHotKey, PowerToys, Registry Key, etc). It might be possible to fix it with a service running below the level of Windows with full admin control which intercepts the generated keys before Windows ever sees them - but as far as I know, no one has succeeded in creating that.
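
For anyone who wants the partial workaround anyway, here's a rough sketch of the same idea using Python's keyboard module rather than AutoHotKey (assumes Windows, an elevated prompt, and that your machine reports the chord the way described above; the key names are my guess and may differ). It can turn the key into a Right-Ctrl tap, but as explained above nothing at this level can make it behave like a true held-down modifier:

    # Partial remap sketch: turn the CoPilot key into a Right-Ctrl *tap*.
    # pip install keyboard; run elevated on Windows. Key names are assumptions.
    import keyboard

    def on_f23(event):
        if event.event_type == "down":
            # Release the phantom modifiers the CoPilot chord leaves pressed...
            for mod in ("shift", "alt", "windows", "ctrl"):
                keyboard.release(mod)
            # ...then emit a plain Right-Ctrl press-and-release.
            # Because Windows reports F23 down+up immediately, there is no
            # reliable "key held" signal, so a tap is the best we can do here.
            keyboard.send("right ctrl")

    keyboard.hook_key("f23", on_f23, suppress=True)
    keyboard.wait()  # keep the hook running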

> "One thing you'll notice is the message we delivered around our products was not AI-first," Dell head of product, Kevin Terwilliger says with a smile. "So, a bit of a shift from a year ago where we were all about the AI PC."

> "We're very focused on delivering upon the AI capabilities of a device—in fact everything that we're announcing has an NPU in it—but what we've learned over the course of this year, especially from a consumer perspective, is they're not buying based on AI," Terwilliger says bluntly. "In fact I think AI probably confuses them more than it helps them understand a specific outcome."

He's talking about marketing. They're still gonna shove it into anything and everything they can. They just aren't gonna tell you about it.

WTF is an "AI PC"? Most of "AI" happens on the internet, in big datacenters; your PC has nothing to do with that. It's more likely to confuse users who don't understand why they need a special PC when any PC can access chatgpt.com.

Now, the people who actually want to do AI locally are not going to look for "AI PCs". They are going to look for specific hardware: lots of RAM, big GPUs, etc. And it is not a very common use case anyway.

I have an "AI laptop", and even I, who run a local model from time to time and bought that PC with my own money, don't know what it means. Presumably some matrix multiplication hardware that I have no idea how to take advantage of. It was a good deal for the specs it had; that's the only thing I cared about, and the "AI" part was just noise.

At least a "gaming PC" means something. I expect high power, a good GPU, a CPU with good single-core performance, usually 16 to 32 GB of RAM, high refresh rate monitor, RGB lighting. But "AI PC", no idea.

  • "AI PC" in MS parlance is a computer with a 40+ TOPS NPU built in. Yes, they are intended for local AI applications.

> It's not that Dell doesn't care about AI or AI PCs anymore, it's just that over the past year or so it's come to realise that the consumer doesn't.

This seems like a cop-out for saving cost by putting Intel GPUs in laptops instead of Nvidia ones.

  • Discrete GPU + laptop means 2 hours of battery life. The average customer isn't buying those.