Comment by simonh
3 hours ago
It is possible, but it requires specifically labelling the data. You have to craft question response pairs to label. But even then the result is only probabilistic.
The LLM in this case had been very thoroughly trained and instructed quite specifically not to do many of the things it actually then when off and did.
It may be that there's a kind of cascade effect going on here. Possibly once the LLM breaks one rule it's supposed to follow, this sets it off on a pattern of rule violations. After all what constitutes a rule violation is there in the training set, it is a type of token stream the LLM has been trained on. It could be the LLM switches into a kind of black hat mode once it's violated a protocol that leads it down a path of persistently violating protocols, and given the statistical model some violations of protocol are always possible.
My mother was a primary school teacher. She used to say that the worst thing you can say to a bunch of kind leaving class down the hall is "don't run in the hall". It puts it in their minds. You need to say "Please walk in the hall", then they'll do it.
No comments yet
Contribute on Hacker News ↗