Comment by Twirrim

6 days ago

I've been using opencode pointed at a local model running under llama.cpp.

The last thing I had it build is a Rust-based app that essentially pulls data from a set of APIs every 2 minutes, processes it, and stores the results in a local database, with a half-hourly task that runs further analysis. It has done a decent job.
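For anyone curious what that shape of app looks like, here's a minimal sketch of the two-cadence scheduling. The `fetch_and_store` and `analyze` functions are hypothetical stand-ins (the comment doesn't show the real code), and the loop uses a simulated one-tick-per-minute clock; a real implementation would sleep on actual timers (e.g. `tokio::time::interval`):

```rust
use std::collections::VecDeque;

// Hypothetical stand-in for "pull from the APIs, process, write a row".
fn fetch_and_store(db: &mut VecDeque<u64>, minute: u64) {
    db.push_back(minute); // pretend this is a row in the local database
}

// Hypothetical stand-in for the half-hourly analysis pass.
fn analyze(db: &VecDeque<u64>) -> usize {
    db.len() // pretend this aggregates the stored rows
}

fn main() {
    let mut db = VecDeque::new();
    // Simulated clock: one loop iteration per minute over a one-hour window.
    for minute in 1..=60u64 {
        if minute % 2 == 0 {
            fetch_and_store(&mut db, minute); // every 2 minutes
        }
        if minute % 30 == 0 {
            let _rows = analyze(&db); // every half hour
        }
    }
    println!("fetches={}", db.len());
}
```

Over the simulated hour that's 30 fetches and 2 analysis passes; the real app would just replace the loop with timers at the true intervals.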

It's definitely not as fast or as good as the large online models, but it's fast enough and good enough, and it runs on hardware I already had spare.