Comment by mholt
5 days ago
I dunno. M1 might be fine; I just haven't tested it with an M1. If you enable semantic features, it does use a large model (not necessarily an LLM) for embeddings; but regardless, it will generate thumbnails for images and videos, and transcode videos for playback, so a good GPU is helpful.
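(For anyone curious what "semantic features" boils down to: the embedding model maps each item to a vector, and search then ranks items by vector similarity. This is just a generic illustration with made-up numbers, not the project's actual code; real embeddings have hundreds of dimensions.)

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors: dot product
    # divided by the product of their magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dim embeddings for a text query and two photos.
query   = [0.1, 0.9, 0.0]
photo_a = [0.2, 0.8, 0.1]  # semantically close to the query
photo_b = [0.9, 0.1, 0.0]  # semantically distant

embeddings = {"photo_a": photo_a, "photo_b": photo_b}

# Rank photos by similarity to the query; the closest one comes first.
ranked = sorted(embeddings, key=lambda k: cosine_similarity(query, embeddings[k]),
                reverse=True)
```

Here `ranked` puts `photo_a` first because its vector points in nearly the same direction as the query's. The expensive part (and why the GPU matters) is producing the embeddings in the first place, not comparing them.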