Comment by razster

4 hours ago

I run a local model on the daily. I have it making tickets when certain emails come in, and I made a small tool I can click to approve ticket creation. It follows my instructions and has a nice chain-of-thought process trained in. Local LLMs are starting to become very useful. Not OpenClaw crap.
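For anyone curious what that flow could look like, here's a minimal sketch. Everything specific is an assumption, since the comment doesn't name a stack: it presumes the local model is served through an OpenAI-compatible endpoint (e.g. Ollama on localhost:11434), `create_ticket()` is a hypothetical stand-in for a real ticketing API, and the one-click approval is reduced to a terminal prompt.

```python
# Sketch of: incoming email -> local-LLM ticket draft -> human approval.
# Assumptions: Ollama (or similar) serving an OpenAI-compatible API locally;
# model name and create_ticket() are placeholders, not the commenter's setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def draft_ticket(email_body: str) -> str:
    """Ask the local model to turn an incoming email into a ticket draft."""
    resp = client.chat.completions.create(
        model="llama3.1:8b",  # assumed model name; use whatever you serve
        messages=[
            {"role": "system",
             "content": "Summarize this email as a support ticket: "
                        "a one-line title, then a short description."},
            {"role": "user", "content": email_body},
        ],
    )
    return resp.choices[0].message.content

def create_ticket(draft: str) -> None:
    """Hypothetical stand-in for the real ticketing-system API call."""
    print(f"Ticket created:\n{draft}")

def handle_email(email_body: str) -> None:
    draft = draft_ticket(email_body)
    # The "click to approve" step, reduced to a y/N prompt for this sketch.
    if input(f"{draft}\n\nCreate this ticket? [y/N] ").strip().lower() == "y":
        create_ticket(draft)

if __name__ == "__main__":
    handle_email("Hi, the VPN has been down since 9am. Can someone look?")
```

In a real setup the `handle_email` call would be wired to whatever watches the inbox (IMAP polling, a mail webhook, etc.), and the approval prompt would be the button the commenter mentions.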

What VRAM are you running to allow both a capable model and everything else the device needs to run?