Comment by winddude
6 hours ago
> This has very little to do with someone making the LLM too human but rather a core limitation of the transformer architecture itself.
It has almost everything to do with it. Models have been fine-tuned, e.g. via RLHF, to generate outputs that humans prefer.