
Comment by DiabloD3

17 hours ago

So, from time to time I'll try running the new frontier research models locally. Not being held back by shitty quants, bizarre sampler settings, and weird context configurations vastly improves output quality over whatever the commercial services are doing; plus, having an actual copy of the weights means I get consistent service quality.
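To make "consistent" concrete, here's a minimal sketch of what pinning those knobs yourself looks like, assuming llama-cpp-python and a hypothetical GGUF model path; the specific values are illustrative, not anyone's recommended settings.

    from llama_cpp import Llama

    # Hypothetical model path; whichever quant you trust goes here.
    llm = Llama(
        model_path="models/frontier-model-q8_0.gguf",
        n_ctx=8192,  # context window fixed by you, not by a provider
        seed=42,     # reproducible sampling across runs
    )

    out = llm(
        "Explain speculative decoding in two sentences.",
        max_tokens=256,
        temperature=0.7,    # sampler settings you control
        top_p=0.9,          # and can keep stable forever
        repeat_penalty=1.1,
    )
    print(out["choices"][0]["text"])

Because the weights, quant, context size, and sampler parameters are all fixed on your side, the same prompt keeps producing the same quality of output; a hosted API can silently change any of these under you.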

Problem is, a good LLM reproduces its training as close to verbatim as the prompt and quant quality allow. Like, that's its entire purpose. It gives you more of what you already have.

Most of these models are trained on unvetted inputs. They will reproduce bad inputs, and reproduce them well. They do not comprehend anything you're saying to them. They are not reasoning machines; they are reproduction machines.

Just because I can get better quality running inference locally doesn't mean it stops being an LLM. I don't want a better LLM; I want a machine that can actually reason effectively.