Comment by marak830

19 hours ago

I run a separate memory layer between my local and my chat.

Without a ton of hassle, I can't do that with a public model (without paying API pricing).

My responses may be slower, but I know the historical context will be there, along with my model overrides.

In addition, I can bolt on modules as I feel like it (voice, avatar, SillyTavern, to list a few).

I get to control my setup by selecting specific models for specific tasks, and I can upgrade as new ones are released.
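Per-task model selection can be as simple as a routing table. The model names here are hypothetical placeholders, not a recommendation:

```python
# Hypothetical routing table: pick a local model per task type.
MODEL_FOR_TASK = {
    "chat": "llama-3.1-8b-instruct",
    "summarize": "qwen2.5-7b-instruct",
    "code": "deepseek-coder-6.7b",
}


def pick_model(task, default="llama-3.1-8b-instruct"):
    # Fall back to a general-purpose model for unknown tasks
    return MODEL_FOR_TASK.get(task, default)
```

Upgrading then means swapping one entry in the table rather than changing the whole stack.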

These are the reasons I use local.

I do use Claude as a coding junior, so I can assign tasks and review the output, purely because I don't have anything that can replicate that locally on my setup (hardware-wise; and from what I have read, local coding models are not matching Claude yet).

That's more than likely a temporary issue (years, not weeks, given the expense of hardware and the current state of open models specialising in coding).