Comment by deepsquirrelnet
7 hours ago
One of the issues with using LLMs for content generation is that instruction tuning causes mode collapse. For example, if you ask an LLM to generate a random number between 1 and 10, it might pick a number like 7 as much as 80% of the time. Base models do not exhibit the same behavior.
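An easy way to see this for yourself is to send the same "pick a random number" prompt to a base checkpoint and an instruction-tuned checkpoint many times and tally the answers. A minimal sketch using the Hugging Face transformers text-generation pipeline follows; the model names are just placeholders for whatever base/instruct pair you have locally, and the prompt and sample count are arbitrary choices:

```python
from collections import Counter
from transformers import pipeline

# Placeholder model IDs; swap in any base / instruction-tuned pair you have.
MODELS = {
    "base": "gpt2",
    "instruct": "Qwen/Qwen2.5-0.5B-Instruct",
}

PROMPT = "Pick a random number between 1 and 10. Answer with just the number: "

def tally(model_name, n_samples=100):
    """Sample the prompt n_samples times and count which numbers come back."""
    gen = pipeline("text-generation", model=model_name)
    counts = Counter()
    for _ in range(n_samples):
        out = gen(
            PROMPT,
            max_new_tokens=3,
            do_sample=True,
            temperature=1.0,
            return_full_text=False,
        )[0]["generated_text"]
        # Keep only the leading digits the model emits (e.g. "7" or "10").
        digits = "".join(ch for ch in out if ch.isdigit())
        if digits:
            counts[int(digits[:2])] += 1
    return counts

for label, name in MODELS.items():
    print(label, dict(tally(name)))
```

If the claim holds, the instruct tally will be heavily concentrated on one or two values while the base tally is spread more evenly (though base models will also produce more junk that never parses as a number).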
"Creative output" takes on an entirely different meaning once you start to think about how these models actually work.