No complaints here. I use a Framework Desktop with this chip: 32 GB allocated to system RAM and the rest to VRAM. It runs large models like 'gpt-oss:120b' fine. I splurged on a second SSD for mirroring, hoping to speed up reads and model loads. Haven't tested whether it actually helps, but it also gives redundancy. Shrugs!
I haven't paid for a subscription in years, or even signed up for $EMPLOYER's offerings; the local setup handles the rare task I'd otherwise outsource well enough.
Also interested.