> It's so much faster.
A lot of things are "so much faster" than the right thing. "Vibe traffic safety laws" are much faster to write than ones that actually improve traffic safety: http://propublica.org/article/trump-artificial-intelligence-... . You, your team, and your colleagues are producing shiny trash at unbelievable velocity. Is that valuable?
If I may ask, does the code produced by the LLM follow best practices or established patterns? What mental model do you use to comprehend your codebase?
Please know I am asking out of curiosity and do not intend to be disrespectful.
Think of the LLM as a slightly lossy compression algorithm fed by various pattern classifiers that weight and bin inputs and outputs.
The user of the LLM provides a new input, which may or may not closely match the smudged-together training inputs, and gets back an output that falls into the same general pattern as the outputs seen in the training dataset.
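A crude sketch of that analogy in Python (everything here is invented for illustration, including the bin count and vector sizes; a real model is vastly more complicated, but the lossy bin-and-match shape is the point):

    import numpy as np

    rng = np.random.default_rng(0)

    # "Training data": input vectors paired with output vectors that
    # follow some underlying pattern.
    train_inputs = rng.normal(size=(1000, 8))
    train_outputs = train_inputs @ rng.normal(size=(8, 4))

    # "Compression": bin the inputs around a handful of centroids and
    # keep only the average output per bin; the fine detail is lost.
    n_bins = 16
    centroids = train_inputs[rng.choice(len(train_inputs), n_bins, replace=False)]

    def nearest_bin(x):
        return np.argmin(np.linalg.norm(centroids - x, axis=1))

    bin_means = np.zeros((n_bins, 4))
    counts = np.zeros(n_bins)
    for x, y in zip(train_inputs, train_outputs):
        b = nearest_bin(x)
        bin_means[b] += y
        counts[b] += 1
    bin_means /= np.maximum(counts, 1)[:, None]

    # "Inference": a new input is matched to its nearest bin, and the
    # output follows that bin's general pattern, whether or not the
    # input genuinely belongs there.
    query = rng.normal(size=8)
    print(bin_means[nearest_bin(query)])

The detail of the individual training pairs is thrown away at compression time; a query only ever retrieves the averaged pattern of whichever bin it lands nearest.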
We aren't anywhere near general intelligence yet.