Comment by meiraleal

1 year ago

Who did you ask? ChatGPT? Not sure if you understand LLMs, but an LLM's knowledge is based on its training data; it can't reason about itself, it can only hallucinate in this case, sometimes correctly, most times incorrectly.

This is also true for pretty much all humans, and bypassing this limitation is called enlightenment/self-realization.

LLMs don't even have a self, so it can never be realized. The ego alone exists.