Comment by labrador
1 day ago
This is not surprising. The training data likely contains many instances of employees defending themselves and receiving supportive comments, on Reddit for example. It also likely contains many instances of employees behaving badly and being criticized. Your prompts are steering the LLM toward those different parts of the training data.
You seem to think an LLM should have a consistent worldview, as a responsible person would. That is a fundamental misunderstanding, and it is the source of the confusion you are experiencing.
Lesson: don't expect LLMs to be consistent, and don't rely on them for important things on the assumption that they are.