Comment by sweetheart
2 years ago
> secretly pre-adjust outcomes without disclosure.
Isn't that the whole training process, though? Unless you know every piece of data used to train it, and how each piece was prepared, you have to assume that any LLM you use comes with a particular viewpoint baked in.