
Comment by mystraline

8 months ago

Almost?

I've been running a programming LLM locally with a 200k context length, using system RAM.
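(The comment doesn't say which runtime serves the model, but setups like this are typically queried through an OpenAI-compatible local endpoint, as exposed by e.g. llama.cpp's server or Ollama. A minimal sketch, with the port and model name as placeholders:)

```python
# Minimal sketch: querying a locally hosted model through an
# OpenAI-compatible endpoint. The URL, port, and model name are
# assumptions -- the comment doesn't specify the runtime in use.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # assumed local endpoint
    json={
        "model": "local-coder",  # placeholder model name
        "messages": [
            {"role": "user",
             "content": "Write a Python function that merges two sorted lists."}
        ],
        "max_tokens": 512,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```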

It's also an abliterated model, so I get none of the moralizing or forced ethics either. I ask, and it answers.

I even have it hooked up to my HomeAssistant instance and can trigger complex actions from there.
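(The comment doesn't say how the integration is wired, but one plausible path is having the model's output drive Home Assistant's documented REST API. A minimal sketch, with the host, token, and entity IDs as placeholders:)

```python
# Minimal sketch of one way an LLM's output could drive Home Assistant:
# calling the documented REST API with a long-lived access token.
# Host, token, and entity IDs below are placeholders.
import requests

HA_URL = "http://homeassistant.local:8123"  # assumed local instance
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # created in the HA user profile

def call_service(domain: str, service: str, data: dict) -> None:
    """Invoke a Home Assistant service, e.g. light.turn_on."""
    resp = requests.post(
        f"{HA_URL}/api/services/{domain}/{service}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=data,
        timeout=10,
    )
    resp.raise_for_status()

# Example: an action the model might trigger after parsing a request.
call_service("light", "turn_on", {"entity_id": "light.living_room"})
```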