Intel Announces Arc Pro B70 and Arc Pro B65 GPUs

6 hours ago (techpowerup.com)

600 GB/s of memory bandwidth isn't anything to sneeze at.

~$1000 for the Pro B70, if Microcenter is to be believed:

https://www.microcenter.com/product/709007/intel-arc-pro-b70...

https://www.microcenter.com/product/708790/asrock-intel-arc-...

Any idea if it'll be possible to mix these with nvidia cards? Adding 32GB to a single 3090 setup would be pretty nice.

Where's the A310 / A40 successor? Gimme some SR-IOV in a slot-powered, single-width, low-profile card.

I think this shows a shift in model architecture. MoE and similar designs need more memory relative to the available compute than one big dense model with a lot of layers and weights, and I think this is a trend that will accelerate. Hardware gets built around that trade-off, which encourages even more experts, which shifts the trade-off further, so more experts...

  • Most people doing local inference run the MoE layers on CPU anyway, because decode is not compute-constrained and wasting the high-bandwidth VRAM on unused weights is silly. It's better to use it for longer context. Recent setups even offload the MoE experts to fast NVMe (PCIe 5.0 x4 or similar performance): it's slow, but it opens up running even SOTA local MoE models on ordinary hardware.
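
    A rough sketch of what that looks like in practice (not from this thread): launching llama.cpp's llama-server so that everything except the expert tensors lives in VRAM. The --override-tensor flag, the tensor-name pattern, and the model filename are assumptions based on recent llama.cpp builds, so adjust for whatever MoE GGUF you actually run.

      # Hedged sketch: keep attention/KV cache in VRAM, expert weights in system RAM.
      # Assumes a recent llama.cpp build whose llama-server supports --override-tensor
      # and a GGUF whose expert tensors contain "exps" in their names (common for MoE).
      import subprocess

      subprocess.run([
          "llama-server",
          "-m", "some-moe-model-q4_k_m.gguf",   # hypothetical MoE GGUF on disk
          "-ngl", "99",                         # offload all layers to the GPU...
          "--override-tensor", "exps=CPU",      # ...but route expert tensors to CPU
          "-c", "65536",                        # spend the freed VRAM on longer context
      ])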

32GB of vram for a decent price? I wonder if these will work well for VR, because vram is my current main issue.

  • (VR enthusiast here, mostly under windows)

    Intel support has been mild to nonexistent in the VR space, unfortunately. Given the very finicky latency + engine support, I wouldn't bet on a great experience, but I hope for the best for more competition in this market. (Even AMD has a lot of caveats compared to Nvidia.)

    Footnotes:

    * Critical "as low as it can be" latency support on Intel Xe is still not as mature as Nvidia's; AMD was lagging behind until recently.

    * Not sure about "multiprojection" rendering support on Intel; lack of support can kill VR performance or make games incompatible. (Optimized VR games often rely on it.)

    • It looked like when Intel jumped into this space, they tried to do everything at once. It didn't work well; they were playing catch-up to some very mature systems. They are now being much more selective and restrained. The downside is that things like VR support get put on the back burner for years.

      Good for most people, but if you need that functionality and they don't have it, go somewhere else.

Anyone running an Arc card for desktop Linux who can comment on the experience? I've had smooth sailing with AMD GPUs but have never tried Intel.

  • Running dual Pro B60 on Debian stable mostly for AI coding.

    I was initially confused about which packages were needed (backports kernel + the Ubuntu kobuk team PPA works for me). After getting that right I'm now running vllm mostly without issues (though I don't run it 24/7).

    At first I had major issues with model quality, but the vllm XPU guys fixed it fast.

    Software capability isn't as good as Nvidia's yet (e.g., no fp8 KV cache support last I checked), but with this price difference I don't care. I can basically run a small fp8 local model with almost 100k tokens of context, and that's what I wanted.
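
    For reference, this kind of setup via vLLM's Python API looks roughly like the sketch below; the model name and exact numbers are placeholders rather than my actual config, and it assumes a vLLM build with the Intel XPU backend installed.

      # Hedged sketch: a small fp8 model split across the two Pro B60s with a
      # long context window. Model name and sizes are illustrative placeholders.
      from vllm import LLM, SamplingParams

      llm = LLM(
          model="some-org/some-14b-instruct-fp8",  # hypothetical fp8 checkpoint
          tensor_parallel_size=2,                  # shard across both cards
          max_model_len=100_000,                   # ~100k token context
      )

      out = llm.generate(["Write a binary search in Rust."],
                         SamplingParams(max_tokens=256))
      print(out[0].outputs[0].text)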

  • Afaik driver support is very complete on Linux. You often see Arc GPUs used in media transcoding workloads for that reason.

    • We can all agree that Intel absolutely nailed it with the media encoding on these things. A nice to have for many, vital for others.

  • I've run Arc on Fedora for years and for general desktop use it's been perfect. For LLMs/coding it's getting better, but it's rough around the edges. Had a bug where trying to get VRAM usage through PyTorch would crash the system, etc.
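
    (For context, a VRAM-usage query through PyTorch looks roughly like the sketch below; it assumes a recent PyTorch with XPU support where torch.xpu mirrors the torch.cuda memory API, which may not hold on older builds.)

      # Hedged sketch: read VRAM usage from PyTorch's XPU backend on an Arc card.
      import torch

      if torch.xpu.is_available():
          x = torch.ones(4096, 4096, device="xpu")   # allocate something on the GPU
          print(torch.xpu.memory_allocated() / 1e6, "MB allocated")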

  • My B580 works fine on Linux. Graphics perf is a bit worse than under Windows, but supposedly compute is pretty much the same.

Wake me when they wake up and release a middling card with 128GB memory.

Since they fired the entire Arc team (a lot of the senior engineers have already updated their LinkedIns to reflect new positions at AMD, Nvidia, and others), as well as laying off most of their Linux driver team (GPU and non-GPU), uh...

WTF?

  • You are exaggerating, right? They didn't really fire the entire Arc team did they? I couldn't find a source saying that.

    • Nope, no exaggeration.

      The news that Celestial is basically canceled already hit the HN front page, as well as that Druid was canceled before tape-out.

      Celestial will only be issued in the variant that goes into budget/industrial embedded Intel platforms with a combined IO+GPU tile; the big-boy performance desktop/laptop parts that would have had a dedicated graphics tile will ship with an Nvidia-produced tile instead.

      There will be no Celestial dGPU variant, nor a dedicated-tile variant. Drivers will be ceasing support for dGPUs of all flavors, and no new bug fixes will happen for B-series GPUs (as there are no B-series iGPUs; A-series iGPUs will remain unaffected).

      They signed the deal like 2-3 months ago to cancel GPUs in favor of Nvidia. The other end of this deal is that the future Nvidia SBCs will be shipping as big-boy variants with Xeon CPUs: Rubin (replacing Blackwell) for the GPU, Vera (replacing Grace) as the on-SBC GPU babysitter, and newest-gen Xeons to do the non-inference tasks that Grace can't handle.

      There is also talk that this deal may lead to Nvidia moving to Intel Foundry, away from TSMC. There is also talk that Nvidia may just buy Intel entirely.

      For further information, see Moore's Law Is Dead's coverage off and on over the past year.

  • This is a chip they've had lying around for a while. It's the same architecture as used in the Arc B580 that launched at the end of 2024; this is just a slightly larger sibling. Intel clearly knew that their larger part wouldn't make for a competitive gaming GPU (hence the lack of a consumer counterpart to these cards), but must have decided that a relatively cheap workstation card with 32GB might be able to make some money.

  • I didn't know this. Have they officially given up on building discrete GPUs? Is this a last gasp of Arc to offload decent remaining architectures at a lower price than nvidia?

    It is crazy to me that, in a world newly craving GPU architectures for AI, with gamers being largely neglected, Intel would abandon an established product line.

    • > It is crazy to me that, in a world newly craving GPU architectures for AI, with gamers being largely neglected, Intel would abandon an established product line.

      You still need to fab it somewhere. Intel's fabs have been plagued with issues for years, the AI grifters have bought up a lot of TSMC's allotments and what remains got bought up by Apple for their iOS and macOS lineups, and Samsung's fabs are busy doing Samsung SoCs.

      And that unfortunately may explain why Intel yanked everything. What use is a product line that can't be sold because you can't get it produced?

      Yet another item on my long list of "why I want to see the AI grift industry burn and the major participants rotting in a prison cell".

Not sure why you'd want this over an Apple setup. The M4 Max has 545 GB/s of memory bandwidth, and $2k gets you an entire Mac Studio with 48GB of RAM vs 32GB for the B70.

  • My thinking is that I'd pick this, because I can't just plug a Mac into a slot in my server and have it easily integrate with all my other hardware across an ultra fast bus.

    If they made an M4 on a card that supported all the same standards and was price competitive, though, that might be a good option.

  • Being able to keep infrastructure on Linux is a big advantage.

      How many compatibility issues is macOS realistically expected to spur? Windows DX felt unusable to me without a Linux VM (and later WSL), but on macOS most tooling just kinda seems to work the same.

  • With those $2k you can have 2x B70, with 1.2 TB/s and 64GB of VRAM, on Linux (and you can scale further, while Mac price increases are not linear).

    • You're absolutely right. And these Intel GPUs will also be much faster in terms of actual math than the M series GPUs that the Apple setup would have.

  • Support for Single Root I/O Virtualization (SR-IOV) to enable compute and graphics workloads in virtualized environments.

  • One can upgrade and swap parts in a computer running an Intel GPU, and Linux is very well supported compared to Mac hardware.