Comment by selcuka
7 months ago
> The most obvious way to adjust the behavior of a LLM is fine-tuning.
Yes, but fine-tuning is expensive. It's also permanent. System prompts can be changed on a whim.
How would you change "today's date" by fine-tuning, for example? What about adding a new tool? What about immediately censoring a sensitive subject?
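To make that concrete, here's a minimal sketch of how you'd inject today's date per-request instead of baking it into weights. The prompt wording and the model id are my own assumptions; the (commented-out) call shape follows Anthropic's Python SDK:

```python
from datetime import date

# Build the system prompt fresh on every request -- something
# fine-tuning can't do, since the date changes daily.
system_prompt = (
    "The assistant's training data has a cutoff, "
    f"but today's date is {date.today().isoformat()}."
)

# With Anthropic's SDK this would be passed per request, e.g.:
# import anthropic
# client = anthropic.Anthropic()
# reply = client.messages.create(
#     model="claude-sonnet-4-0",  # hypothetical model id
#     max_tokens=256,
#     system=system_prompt,
#     messages=[{"role": "user", "content": "What day is it?"}],
# )

print(system_prompt)
```

Swapping the date, adding a tool description, or appending a new refusal rule is just string editing here, which is the whole point.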
Anthropic actually publishes their system prompts [1], so it's a documented method of changing model behaviour.
[1] https://docs.anthropic.com/en/release-notes/system-prompts
> https://docs.anthropic.com/en/release-notes/system-prompts
Honestly I'm surprised that they use such a long prompt. It boggles my mind that they're willing to burn that much of the context window on it.
I've been training DNN models at my job for the past few years, but I would never use something like this.
Note that these are only used for chat. As far as I understand, there are no built-in system prompts when you use their APIs (or maybe they have different, smaller ones).
I guess the rationale is that the end users of chat are not trusted to get their prompts right, thus the system prompt.