Comment by xarope
5 months ago
What I'd like LLMs to do is present examples that follow accepted design standards: e.g. what's the Pythonic way to do this, what exceptions might yield better performance or optimization (and at what cost), or what's the best Go(lang) JSON parser (since the built-in one isn't very good). A minimal sketch of the kind of answer I mean is below.
But instead, I get average to below-average examples (surprise surprise, this is what happens when you train on a dataset with a high noise-to-signal ratio), which are either subtly or wildly incorrect. I can't see this improving, with Reddit and other forums introducing AI-bot-written posts. Surely these companies are aware of how LLM output degenerates when it's fed its own output within a few (not even a dozen) generations?!
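To be concrete, here's a rough sketch of the kind of answer I'm asking for, using JSON parsing in Python as the example. The choice of orjson as the "faster alternative" is just an assumption for illustration, not a definitive recommendation:

```python
# Sketch: "idiomatic way vs. performance exception, and at what cost."
import json
import timeit

# Build a small test payload (orjson and the exact timings here are illustrative assumptions).
payload = json.dumps({"users": [{"id": i, "name": f"user{i}"} for i in range(1000)]})

# Idiomatic: the stdlib json module -- no extra dependency, accepts str input directly.
stdlib_time = timeit.timeit(lambda: json.loads(payload), number=1000)
print(f"stdlib json.loads: {stdlib_time:.3f}s")

# Performance exception: a C/Rust-backed parser such as orjson is usually faster,
# at the cost of an extra dependency and slightly different semantics
# (e.g. orjson.dumps returns bytes rather than str).
try:
    import orjson
    orjson_time = timeit.timeit(lambda: orjson.loads(payload), number=1000)
    print(f"orjson.loads:      {orjson_time:.3f}s")
except ImportError:
    print("orjson not installed; skipping the comparison")
```

That's the shape of answer I want: the idiomatic default, the faster exception, and the trade-off stated explicitly.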