Comment by jwpapi

8 hours ago

Are you aware that LLMs are still the same autocomplete, just with different token decisions, more data, and better pre- and post-training and settings?
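To make "different token decisions" concrete: decoding settings like temperature only change how the next token is picked from the model's output probabilities, not what the model knows. A minimal sketch (function and variable names are my own, not from any particular library):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Pick a next-token id from raw logits.

    As temperature -> 0 this approaches plain argmax (classic
    "autocomplete"); higher values flatten the distribution.
    The knob changes which token gets picked, not the model itself.
    """
    if seed is not None:
        random.seed(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# At near-zero temperature the highest logit always wins:
print(sample_next_token([2.0, 5.0, 1.0], temperature=1e-6))  # prints 1
```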

We have all the data now.

I don’t see where the huge gap should come from; as someone said earlier in the thread, they still make basic errors.

Models got better through a bunch of soft tuning. Language and abstraction are not really the same thing: there are plenty of very good speakers who are terrible at logic and abstraction.

Thinking abstractly sometimes requires leaving language behind and drawing; some people even switch to another programming language to get it.

We’ve seen it with the compiler project: it looks nice, but if you wanted to make a competitive compiler, you’d be about as well off starting fresh.