Comment by bossyTeacher
7 months ago
Your question is a specific form of the more general question: can LLMs behave in ways that were not encoded in their training data?
That leads to the question of what "encoding behaviour" actually means. Even if a specific behaviour is not explicitly present in the training data, it could be implicitly encoded, or encoded in such a way that, given the right conversation, the model can exhibit it.