Comment by lobochrome
2 months ago
"LLM just complete your prompt in a way that match their training data"
"A LLM is smart enough to understand this"
It feels like you're contradicting yourself. Is it _just_ completing your prompt, or is it _smart_ enough?
Do we know if conscious thought isn't just predicting the next token?