Humans update their model of the world as they receive new information.
LLMs have static weights, so they cannot have a concept of truth. If the world changes, they keep asserting whatever was in their training data. Nothing forces an LLM to track reality.
What about a person with short-term memory?
Whataboutism is almost never a compelling argument, and this case is no exception.
ETA:
To elaborate a bit: based on your response, it seems like you don't think my question is a valid one.
If you don't think it's a valid question, I'm curious to know why not.
If you do think it's a valid question, I'm curious to know your answer.
It's not whataboutism; I'm simply asking how you would perform the same test on a human. Then we can see whether it applies to ChatGPT or not.
I don't know. What is your answer to my question?