Comment by Twirrim
1 month ago
Just in case anyone hasn't seen this yet:
https://github.com/ggml-org/llama.cpp/discussions/15396 is a guide to running gpt-oss with llama-server, with settings for various amounts of GPU memory, from 8 GB on up.
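For anyone who hasn't used llama-server before, a typical invocation looks something like the sketch below. The model path and the specific values are placeholders, not the guide's recommendations; the flags themselves (`-m`, `--n-gpu-layers`, `--ctx-size`, `--port`) are standard llama-server options, and the linked guide covers which values to pick for your VRAM.

```shell
# Hedged sketch of a llama-server launch for a GGUF model on limited VRAM.
# Consult the linked guide for the actual recommended settings for gpt-oss.
llama-server \
  -m gpt-oss-20b.gguf \  # placeholder path to a local GGUF model file
  --n-gpu-layers 20 \    # offload only some layers to GPU when VRAM is tight
  --ctx-size 8192 \      # context window; lower it to save memory
  --port 8080            # serves an OpenAI-compatible HTTP API on this port
```

Once running, the server exposes an HTTP endpoint at `http://localhost:8080` that OpenAI-compatible clients can talk to.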