
Comment by vessenes

2 months ago

Even presuming this is an accurate summary, the conclusion is not accurate: most local LLM inference users are constantly trading quality for speed, because speed drops dramatically once RAM is full. So if you think in terms of speed at a desired quality level, this could be very useful.

