DeathArrow 7 hours ago

Yep, maybe I can open a feature request if it makes sense technically.

zozbot234 6 hours ago

Arguably it makes more sense technically to get the model support into llama.cpp, which already provides many options for GPU+CPU split inference.
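For context, the GPU+CPU split zozbot234 mentions is exposed through llama.cpp's layer-offload flags. A rough sketch of what that looks like (model path is a placeholder, layer counts are illustrative):

```shell
# Offload the first 20 transformer layers to the GPU and run the
# remaining layers on the CPU (-ngl / --n-gpu-layers):
llama-cli -m ./model.gguf -ngl 20 -p "Hello"

# With multiple GPUs, -ts / --tensor-split controls the ratio of work
# assigned to each device (here a rough 60/40 split):
llama-cli -m ./model.gguf -ngl 99 -ts 60,40 -p "Hello"
```

Since the split is per-layer, a model that doesn't fit in VRAM can still run with only the overflow layers on the CPU, which is the main appeal over an all-or-nothing GPU backend.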