Comment by am17an 2 days ago: Honestly you can run this on a 16GB VRAM GPU with llama.cpp. Just try it!
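For anyone who hasn't tried llama.cpp before, the invocation is a one-liner. This is a minimal sketch; the GGUF path is a placeholder (the comment doesn't name the exact model), and you'd pick a quantization small enough to fit in 16 GB, e.g. a 4-bit quant:

```shell
# Minimal sketch: the model path is a placeholder, not the specific
# model from this thread. Choose a quant that fits in 16 GB of VRAM
# (a Q4_K_M GGUF of a mid-size model usually does).
./llama-cli \
  -m models/your-model-Q4_K_M.gguf \
  -ngl 99 \
  -c 8192 \
  -p "Hello"
# -ngl 99 offloads all layers to the GPU; -c sets the context length.
# If you run out of VRAM, lower -c or drop to a smaller quant.
```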