Comment by user3939382
3 hours ago
The trouble is that an LLM can test for something being logical in isolation, or coherent unto itself. It's much weaker at anticipating what will be meaningful to other people, which is usually what people are actually looking for.