Comment by Aurornis
2 years ago
> However because it’s so easy to fix this is not an issue and it doesn’t slow me down at all.
But that's a different issue from LLM hallucinations.
With Swype, you already know what the correct output looks like. If the output doesn't match what you wanted, you immediately understand and fix it.
When you ask an LLM a question, you don't necessarily know the right answer. If the output looks confident enough, people take it as the truth. Outside of experimenting and testing, people aren't using LLMs to ask questions for which they already know the correct answer.