Comment by cloudbonsai

7 months ago

I don't understand this at all. What this post suggests seems illogical to me:

- The most obvious way to adjust the behavior of an LLM is fine-tuning. You prepare a carefully curated dataset and train on it for a few epochs.

- This is far more reliable than appending some wishy-washy text to every request. It's far more economical too.

- Even when you want some "toggle" to adjust the model behavior, there is no reason to use a verbose human-readable text. All you need is a special token such as `<humorous>` or `<image-support>`.
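A toggle like that usually amounts to prepending a marker to the prompt at inference time. A minimal sketch of the contrast being drawn, where the token name is invented for illustration (a real model would need to be trained to react to it):

```python
# Illustrative sketch: a special control token vs. a verbose instruction.
# "<humorous>" is a made-up token; a model only responds to it if it was
# trained (or fine-tuned) with that token in its vocabulary.

VERBOSE_INSTRUCTION = (
    "Please respond in a humorous tone, using jokes where appropriate."
)
CONTROL_TOKEN = "<humorous>"

def build_prompt(user_message: str, humorous: bool, use_token: bool) -> str:
    """Prepend either a compact control token or a verbose instruction."""
    if not humorous:
        return user_message
    prefix = CONTROL_TOKEN if use_token else VERBOSE_INSTRUCTION
    return f"{prefix}\n{user_message}"

print(build_prompt("Tell me about cats.", humorous=True, use_token=True))
```

The token variant costs one token per request instead of a full sentence, which is the economy being argued for.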

So I don't think this post is genuine. People are just fooling themselves.

> The most obvious way to adjust the behavior of an LLM is fine-tuning.

Yes, but fine-tuning is expensive. It's also permanent. System prompts can be changed on a whim.

How would you change "today's date" by fine-tuning, for example? What about adding a new tool? What about immediately censoring a sensitive subject?
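To make that concrete: a system prompt is typically assembled fresh on every request, so dynamic values are just string interpolation. A minimal sketch (the function and field names are illustrative, not any particular vendor's API):

```python
from datetime import date

# Illustrative sketch: because the system prompt is rebuilt per request,
# updating "today's date" or the tool list requires no retraining at all.
def build_system_prompt(tools: list[str]) -> str:
    lines = [
        f"Today's date is {date.today().isoformat()}.",
        "You have access to the following tools: " + ", ".join(tools),
    ]
    return "\n".join(lines)

print(build_system_prompt(["web_search", "calculator"]))
```

Achieving the same effect with fine-tuning would mean retraining every day, which is the asymmetry being pointed out.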

Anthropic actually publishes their system prompts [1], so it's a documented method of changing model behaviour.

[1] https://docs.anthropic.com/en/release-notes/system-prompts

  • > https://docs.anthropic.com/en/release-notes/system-prompts

    Honestly, I'm surprised that they use such a long prompt. It boggles my mind that they'd choose to burn so much of the context window on it.

    I've been training DNN models at my job for the past few years, but I would never use something like this.

    • Note that these are only used for chat. As far as I understand there are no built-in system prompts when you use their APIs (or maybe they have different, smaller system prompts).

      I guess the rationale is that the end users of chat are not trusted to get their prompts right, thus the system prompt.