Comment by vessenes
19 hours ago
Even presuming this is an accurate summary, the conclusion is not accurate: most local LLM inference users are constantly trading off quality for speed, because speed drops dramatically once RAM is full. So, if you think in terms of speed at a desired quality, this could be very useful.
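
A rough sketch of the arithmetic behind that tradeoff: once a model's weights plus KV cache exceed RAM, inference spills to disk or offloads, and tokens/sec collapses, so a lower-quality quantization that fits can be faster in practice than a higher-quality one that doesn't. The 70B parameter count, the 64 GB machine, and the 20% overhead factor below are illustrative assumptions, not figures from the thread.

```python
# Back-of-the-envelope check: does a model fit in RAM at a given
# quantization? All numbers here are hypothetical, for illustration only.

def model_footprint_gb(params_billion: float, bits_per_weight: float,
                       overhead: float = 1.2) -> float:
    """Approximate resident size: weights plus ~20% for KV cache/buffers."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

RAM_GB = 64  # hypothetical machine

# q4 uses ~4.5 effective bits/weight in common schemes (scales included).
for name, bits in [("fp16", 16), ("q8", 8), ("q4", 4.5)]:
    size = model_footprint_gb(70, bits)  # hypothetical 70B-parameter model
    fits = "fits in RAM" if size <= RAM_GB else "spills -> speed collapses"
    print(f"70B @ {name}: ~{size:.0f} GB  ({fits})")
```

On these assumed numbers, only the q4 quantization (~47 GB) fits in 64 GB, which is why "speed at desired quality" is the metric that matters: fp16 and q8 would be higher quality on paper but unusably slow once they overflow RAM.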