Comment by bigyabai
7 hours ago
For FP16-native training of 100B+ models, you will probably still be offloading to swap unless you've got a $150,000 RDMA Mac Studio cluster. And even if you could fit the whole thing in memory, the workload would be deeply compute-constrained anyway.
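A back-of-envelope sketch of why swap offloading is hard to avoid. This assumes the commonly cited mixed-precision Adam footprint of ~16 bytes per parameter (FP16 weights and gradients plus FP32 master weights and two FP32 optimizer moments, as in the ZeRO paper) and a 192 GB Mac Studio configuration; both figures are assumptions for illustration, and activations are ignored entirely:

```python
import math

# Mixed-precision Adam state, per parameter:
# 2 B FP16 weights + 2 B FP16 grads + 4 B FP32 master weights
# + 4 B FP32 first moment + 4 B FP32 second moment = 16 B/param
params = 100e9
bytes_per_param = 2 + 2 + 4 + 4 + 4

total_tb = params * bytes_per_param / 1e12
print(f"{total_tb:.1f} TB")  # → 1.6 TB, before any activation memory

# Minimum number of 192 GB machines just to hold that state in RAM
studios_needed = math.ceil(total_tb * 1e12 / 192e9)
print(f"{studios_needed} machines at 192 GB each")  # → 9 machines
```

So even ignoring activations and interconnect bandwidth, the optimizer state alone for a 100B model outstrips a single machine's unified memory by nearly an order of magnitude.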