Comment by alok-g
4 months ago
Newbie comment: I am finding PyTorch over CUDA to be so bloated. The cumulative size of the binaries + drivers often adds up to more than a baseline LLM model, which carries so much knowledge. And now each new AI app (e.g., StemRoller, Jan.ai, etc.) packages its own dependencies, occupying GBs of disk space each. I have yet to look into the LLM.c and Llama.cpp stuff.