Comment by wewewedxfgdf

2 days ago

Why don't they just put RAM slots on the card so you can augment the fast RAM?

Speed and reliability. A connector of any kind reduces signal quality. The data lines also need to be longer, because a memory slot won't fit under the heatsink where the memory chips sit now, and the longer traces pick up even more electrical interference and degrade the signal further.

Also, we had memory slots on '90s cards. They were extremely expensive and proprietary. Ever seen a Matrox VRAM upgrade card? I never have.

  • > A connector of any kind reduces signal quality.

    Like the M.2 connector?

    > Data lines need to be longer

    Like the data lines going all the way to an on-motherboard storage device?

    • Soldered memory is still dramatically better than the M.2 connector (than any connector, really). Have you never wondered why RAM doesn't use PCI Express?

    • > Like the M.2 connector?

      Yes, though likely something with a higher pin count, since memory accesses are more random and more parallel than block-storage accesses.

      > Like the data lines going all the way to an on-motherboard storage device?

      Yes. Why would a GPU manufacturer/packager take on that cost, if it’s presently served well enough for most people by offloading it onto other parts of the system?

    • The current DIMM and SODIMM modules cannot run at speeds much higher than what's available now.

      This is why there are several proposals for improved memory-module form factors using different sockets, such as LPCAMM2, which should be able to work with faster memory.

      However, even LPCAMM2 is unlikely to reach the speeds of soldered GDDR7.


    • Yes and yes. NVMe storage is very slow compared to RAM, so it can get away with such things.

  • I am hoping that we seriously evolve the ATX standard to allow for a socketed GPU board that also enables user-replaceable memory. Seeing an enormous GPU that is larger than the motherboard itself hanging off a PCIe slot feels like horse-and-buggy shit. I'm imagining two boards back to back, connected by a central high-bandwidth bus (which could also handle power delivery), so that one side of the case is for CPU/RAM and the other side for GPU/VRAM.

    • Your solution only allows for one GPU, maybe two if the motherboard is really huge, and it doesn't really solve the slotted VRAM problem.

      PCI cards were (and are) allowed to be even longer: old AT and ATX cases had a slotted support bracket to hold the far end of full-length cards. See what an Adaptec 2400A looks like.

GDDR7x doesn't come in a DIMM form factor?

In general, soldered RAM achieves much higher bandwidth than removable RAM. See Ryzen AI Max vs. 9950X max RAM throughput, for example.

  • Strix Halo uses a 256-bit memory interface; normal desktop processors only have a 128-bit interface, and that's the biggest difference in bandwidth. For more bandwidth you need to go to a Threadripper.

    Strix Halo seems to use LPDDR at 8000 MT/s, which is a bit faster than the usual 5600–6400 MT/s of "normal" DDR5 DIMMs (although faster, and expensive, ones exist), so there's a slight edge for soldered memory (not sure about LPCAMM2 and similar tech).

    GDDR7 is a different league: a 5070 Ti also has a 256-bit memory interface, but delivers 896 GB/s of bandwidth, compared to Strix Halo's 256 GB/s.

    • It seems really hard to push DDR5 past 6000 MT/s with four DIMMs populated.

      I had to get everything top-spec to run four sticks at 6000 MT/s on my 9950X (ASUS ProArt motherboard and top-tier Trident Neo RAM sticks); anything less is reportedly unstable.

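The bandwidth figures in this subthread fall straight out of bus width × per-pin data rate. A quick sanity check in Python (the 28 GT/s GDDR7 rate is implied by 896 GB/s over a 256-bit bus; the other rates are the ones quoted above):

```python
def bandwidth_gb_s(bus_width_bits: int, data_rate_gt_s: float) -> float:
    """Peak theoretical bandwidth: bus width in bytes times transfers per second."""
    return bus_width_bits / 8 * data_rate_gt_s

# Strix Halo: 256-bit LPDDR at 8000 MT/s (8 GT/s)
print(bandwidth_gb_s(256, 8.0))   # → 256.0 GB/s

# RTX 5070 Ti: 256-bit GDDR7 at 28 Gbps per pin
print(bandwidth_gb_s(256, 28.0))  # → 896.0 GB/s

# Desktop 9950X: 128-bit (dual-channel) DDR5-6000
print(bandwidth_gb_s(128, 6.0))   # → 96.0 GB/s
```

So the 5070 Ti's edge over Strix Halo comes entirely from per-pin data rate, since the bus width is the same.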

  • No.

    All GDDR memory is designed to be soldered around the GPU chip on the same PCB. This is how it achieves memory throughput 4 to 8 times higher than the DDR memory used in DIMMs or SODIMMs.
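That 4-to-8× figure is roughly what per-pin data rates predict at equal bus width. A rough check (the per-pin rates below are assumed typical spec-sheet values, not taken from this thread):

```python
# Assumed typical per-pin data rates in Gbps (ballpark spec values)
ddr5_dimm = (4.8, 6.4)    # DDR5-4800 .. DDR5-6400
gddr7 = (28.0, 32.0)      # current GDDR7 parts

lo = gddr7[0] / ddr5_dimm[1]  # slowest GDDR7 vs. fastest common DIMM
hi = gddr7[1] / ddr5_dimm[0]  # fastest GDDR7 vs. slowest DIMM
print(f"{lo:.1f}x .. {hi:.1f}x")  # ≈ 4.4x .. 6.7x per pin
```

On top of that, GPUs often use wider buses than the 128-bit dual-channel DIMM setup, which multiplies the total gap further.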