Comment by selfhoster11

2 days ago

Far less than you'd think for local LLMs.

Local LLMs that you can run on consumer hardware don't really do anything, though. They're amusing, and maybe you could use them for basic text search, but they don't have any real knowledge like the hosted ones do.

  • Gemma 3 27B, models in the 8-16B range, and even some 32B models can be run on hardware that fits in the "consumer" bracket: quantized to 4 bits, a 27B model needs roughly 16-17GB of memory. RAM is more expensive now, but most people can afford a machine with 32GB and maybe a small graphics card. (A runnable sketch of this follows at the end of this comment.)

    Small models don't have as much world knowledge as very large ones (proprietary or open-source), but you don't always need it. OCR, image captioning, tagging, following well-defined instructions, general chat, and some coding are all things local models do pretty well; see the tagging sketch below.

    Edit: fixed unnecessarily abrasive wording
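
    For anyone curious what "runs on consumer hardware" looks like in practice, here's a minimal sketch using llama-cpp-python with a 4-bit GGUF build of Gemma 3 27B. The model filename and the number of offloaded layers are assumptions; adjust them for your own download and GPU.

        # Minimal sketch: run a 4-bit quantized Gemma 3 27B on a 32GB machine.
        # Assumes llama-cpp-python is installed and a Q4_K_M GGUF has been
        # downloaded; the filename below is an assumption, not an official path.
        from llama_cpp import Llama

        llm = Llama(
            model_path="gemma-3-27b-it-Q4_K_M.gguf",  # roughly 16-17GB at 4-bit
            n_ctx=8192,        # context window; larger values cost more RAM
            n_gpu_layers=20,   # offload what fits on a small GPU; 0 = CPU only
        )

        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": "Give me three uses for a local LLM."}],
            max_tokens=128,
        )
        print(out["choices"][0]["message"]["content"])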
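
    And the "well-defined instructions" use case, sketched as a tagging task against Ollama's local HTTP API. This assumes ollama serve is running on the default port and the model has been pulled; gemma3:27b is how Ollama currently tags the model, so treat that name as an assumption too.

        # Minimal sketch: a well-defined tagging task via Ollama's HTTP API.
        # Assumes `ollama serve` is running locally on the default port and
        # `ollama pull gemma3:27b` has been done (the tag is an assumption).
        import json
        import urllib.request

        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({
                "model": "gemma3:27b",
                "prompt": "Return three comma-separated topic tags for this text: "
                          "'RAM prices spiked this year, but 32GB machines are still affordable.'",
                "stream": False,
            }).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.loads(resp.read())["response"])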