Comment by sim7c00
6 months ago
It seems more like it's looping through possible answers, going back to the same bad answers and hoping you forgot it already gave them. We're training incredibly expensive and eloquent goldfish.
Maybe this is the effect of LLMs interacting with each other, the dumbing down. GPT-6 will be a Markov chain again, and GPT-7 will know that f!sh go m00!