Comment by zozbot234
7 hours ago
Arguably it makes more sense technically to get the model support into llama.cpp, which provides many options for GPU+CPU split inference already.
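For context, llama.cpp's split-inference options boil down to choosing how many transformer layers to offload to the GPU, with the remainder running on CPU. A minimal sketch of a typical invocation (the model path is a placeholder; flag names reflect llama.cpp's CLI as of recent releases, so check `llama-cli --help` for your build):

```shell
# Offload 20 layers to the GPU; remaining layers run on CPU.
# -m   : path to a GGUF model file (placeholder here)
# -ngl : number of layers to place on the GPU (--n-gpu-layers)
# -c   : context size in tokens
llama-cli -m ./model.gguf -ngl 20 -c 4096 -p "Hello"

# Multi-GPU case: --tensor-split divides tensors across devices
# (here 60/40 between two GPUs).
llama-cli -m ./model.gguf -ngl 99 --tensor-split 0.6,0.4 -p "Hello"
```

Tuning `-ngl` up or down is the usual way to fit a model that is larger than VRAM while still getting partial GPU acceleration.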