Comment by meiraleal
1 year ago
Who did you ask? ChatGPT? I'm not sure you understand LLMs: their knowledge is based on their training data, and they can't reason about themselves. In this case they can only hallucinate, sometimes correctly, most of the time incorrectly.
This is also true for pretty much all humans, and bypassing this limitation is called enlightenment/self-realization.
LLMs don't even have a self, so it can never be realized. Only the ego exists.
No, humans can self-inspect just fine.
A lot of psychologists would quibble with that...
How do you know that?
Any evidence of that?
Have you seen the current US political system? Or Hawk Tuah?