manmal 7 hours ago
It wouldn't be useful with your setup: probably 3-4 tokens per second.

DeathArrow 6 hours ago
Yep, maybe I can open a feature request, if it makes sense technically.

zozbot234 5 hours ago
Arguably it makes more sense technically to get the model support into llama.cpp, which already provides many options for GPU+CPU split inference.
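(For context: the GPU+CPU split zozbot234 mentions is llama.cpp's layer offloading, where you pick how many transformer layers go to the GPU and the rest run on the CPU; on the CLI this is the `-ngl` / `--n-gpu-layers` flag. A minimal sketch against the C API, assuming a recent llama.cpp checkout; exact function names have changed across versions, and the model path and layer count here are placeholders:

    #include "llama.h"
    #include <stdio.h>

    int main(void) {
        llama_backend_init();

        // GPU+CPU split: offload the first 20 transformer layers to the
        // GPU; any remaining layers run on the CPU. Tune this to whatever
        // fits in your VRAM.
        struct llama_model_params mparams = llama_model_default_params();
        mparams.n_gpu_layers = 20;

        // "model.gguf" is a placeholder path to a quantized model file.
        struct llama_model * model =
            llama_model_load_from_file("model.gguf", mparams);
        if (!model) {
            fprintf(stderr, "failed to load model\n");
            return 1;
        }

        // ... create a context and run inference as usual ...

        llama_model_free(model);
        llama_backend_free();
        return 0;
    }

This per-layer split is why llama.cpp is a natural target when a model is too big for VRAM alone, which is the scenario the thread started from.)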