Comment by magic_hamster

> gone are the days of PCIe.

My GPU, NVMe drives and motherboard might disagree.

The top Mac Studio has six Thunderbolt 5 ports, each of which is a PCIe 4.0 x4 link. Each is an 8 GB/sec link in each direction, which is a lot. Going from x16 down to x4 costs less than 10% in games: https://www.reddit.com/r/buildapc/comments/sbegpb/gpu_in_pci...
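Those figures can be sanity-checked with quick arithmetic. This is a rough sketch: per-lane transfer rates and the 128b/130b encoding factor are from the PCIe spec for gen 3 and later, and packet/protocol overhead is ignored, so real-world throughput is a bit lower.

```python
# Approximate usable PCIe bandwidth, one direction, ignoring protocol overhead.
# PCIe 3.0+ uses 128b/130b line encoding, so bytes/sec per lane is
# (transfer rate in GT/s) * (128/130) / 8.

def pcie_bandwidth_gb_s(rate_gt_s: float, lanes: int) -> float:
    """Approximate one-direction bandwidth in GB/s for a PCIe link."""
    return rate_gt_s * (128 / 130) / 8 * lanes

# PCIe 4.0 runs at 16 GT/s per lane; an x4 link (one Thunderbolt 5 port's
# worth, per the comment above) lands right around the quoted 8 GB/sec.
tb5_link = pcie_bandwidth_gb_s(16.0, 4)

# PCIe 5.0 runs at 32 GT/s per lane; 128 lanes (Threadripper-class) is
# roughly 0.5 TB/s each way, i.e. about 1 TB/s counting both directions.
threadripper = pcie_bandwidth_gb_s(32.0, 128)

print(f"PCIe 4.0 x4:   {tb5_link:.1f} GB/s per direction")
print(f"PCIe 5.0 x128: {threadripper:.0f} GB/s per direction")
```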

  • Your example uses a GTX 1080, which is a very old GPU. A current flagship consumer GPU will take a harder hit from low PCIe bandwidth.

    • Here’s more recent HW: https://www.pugetsystems.com/labs/articles/impact-of-gpu-pci...

      This is an RTX 4080.

      “In the more common situations of reducing PCI-e bandwidth to PCI-e 4.0 x8 from 4.0 x16, there was little change in content creation performance: There was only an average decrease in scores of 3% for Video Editing and motion graphics. In more extreme situations (such as running at 4.0 x4 / 3.0 x8), this changed to an average performance reduction of 10%.”

  • PCIe 4.0 x4 is going to be a huge bottleneck; even recent SSDs have more throughput than that (they use PCIe 5.0), never mind GPUs.

  • Gaming isn't what people are using Mac Studios for. Thunderbolt also isn't a substitute for OCuLink.

    • Sure, but it’s probably reflective of the fact that GPUs generally aren’t PCIe-bandwidth bound. Also, TB5 and OCuLink 2 both use PCIe 4.0 x4 links.

    • Um, I have an M3 Ultra 512GB on my desk for development. Love me some Baldur’s Gate 3, everything turned up to 11…

  • Yeah, 80 GB/s total I/O bandwidth is a lot for a Mac, but desktop PCs have been doing 1 TB/s (128 lanes of PCIe 5.0) for years (Threadripper etc.).

    • Sure. And lots of people need all that I/O. But my point is that it’s not like the Mac Studio has no I/O. The outgoing Mac Pro only has 24 total lanes of PCIe 4.0 going to the switch chip that feeds all the PCIe slots. Externally routed PCIe is a development of the last few years that may have factored into the change in form factor.

- GPU is integrated into the SoC
- Surprisingly, it is possible to plug a drive into a TB/USB port

…so what do you actually need PCIe for?

  • High-end Macs have moved to PCIe 5.0 speeds in their internal drives. Thunderbolt 5 is not fast enough to get the same performance from external ones.

    Thunderbolt is also too slow for higher-end networks. A single port is already insufficient for 100-gigabit speeds.

    • When people talk about 100-gigabit networks for Macs, I’m really curious what kind of network you run at home and how much money you spent on it. Even at work I generally see 10-gigabit network ports, with 100-gigabit+ only in data centers, where Macs don’t have a presence.

  • To have lots of devices plugged in at once: high-end audio cards, electronics integrations, disks without cables running all over the place.

  • Things that aren’t graphics cards, such as very high-bandwidth video capture cards and any other equipment that needs a lot of lanes of PCIe data at low latency.

  • but what about second GPU?

    • Multi-GPU was tried by the whole industry, including Apple (most notably with the trash-can Mac Pro). Despite significant investment, it was ultimately a failure for consumer workloads like gaming, and was relegated to the datacenter and some very high-end workstations, depending on the workload.

      Multi-GPU has recently experienced a resurgence due to the discovery of new workloads with broader appeal (LLMs), but that's too new to have significantly influenced hardware architectures, and LLM inference isn't the most natural thing to scale across many GPUs. Everybody's still competing with more or less the architectures they had on hand when LLMs arrived, with new low-precision matrix math units squeezed in wherever room can be made. It's not at all clear yet what the long-term outcome will be in terms of the balance between local vs cloud compute for inference, whether there will be any local training/fine-tuning at all, and which use cases are ultimately profitable in the long run. All of that influences whether it would be worthwhile for Apple to abandon their current client-first architecture that standardizes on a single integrated GPU and omits/rejects the complexity of multi-GPU setups.