
Comment by Vetch

5 days ago

This is an interesting proposition. Have you tested this with the best open LLMs?

Yes; in fact, many people "test" this by accident every day while trying to set up popular instruction-following models for "roleplaying" purposes through UIs like SillyTavern.

Open models are almost always served, whether remotely hosted or run locally, through a pure text-completion API. If you want chat, the client talking to that text-completion API is expected to act as the business layer: either literally, as a server that in turn exposes a chat-completion API, or by vertically integrating the chat business logic (structuring the message stream, specifying logit biases, terminating the stream early on state changes, etc.) into the completion-service abstraction layer of the ultimate client application.
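Concretely, that business layer boils down to something like the following sketch. I'm assuming a ChatML-style template and an OpenAI-compatible text-completion endpoint here purely for illustration; the actual turn markers, endpoint path, and response shape all vary by model and serving stack.

```python
# Minimal sketch of the client-side "business layer": flatten chat messages
# into one raw prompt using the model's chat template, then call a plain
# text-completion endpoint. The ChatML-style markers, endpoint URL, and
# response shape below are assumptions, not any particular model's contract.
import requests

END_OF_TURN = "<|im_end|>"  # model-specific, and often poorly documented

def render_prompt(messages: list[dict]) -> str:
    """Serialize chat messages into the raw text the model was tuned on."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}{END_OF_TURN}\n"
        for m in messages
    ]
    # Leave the prompt open at the start of the assistant turn, so the
    # completion engine generates the agent's next conversation step.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

def chat(messages: list[dict], base_url: str = "http://localhost:8000") -> str:
    resp = requests.post(
        f"{base_url}/v1/completions",
        json={
            "model": "some-open-model",   # placeholder name
            "prompt": render_prompt(messages),
            "stop": [END_OF_TURN],        # terminate at the end of the agent's turn
            "max_tokens": 512,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]
```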

In either case, any slip-up in the business-layer configuration (which is common, since these models each use different end-of-conversation-step sequences and rarely document them well) can and does result in seeing "under the covers" of these models.
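Continuing the sketch above, the slip-up can be as small as a missing or wrong stop sequence: the engine keeps sampling past the agent's turn, and the raw output includes the template scaffolding plus an invented next human turn. The output shown in the comments is hypothetical, just to illustrate the shape of the leak.

```python
# Illustrative only: with no stop sequence, generation rolls past the
# end-of-turn marker and exposes the template plus a fabricated user turn.
raw = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "some-open-model",
        "prompt": render_prompt([{"role": "user", "content": "Hi there!"}]),
        "stop": [],          # slip-up: nothing tells the engine where the turn ends
        "max_tokens": 512,
    },
    timeout=60,
).json()["choices"][0]["text"]
print(raw)
# Hypothetical output:
#   Hello! How can I help you today?<|im_end|>
#   <|im_start|>user
#   Great, can you also help me with ...
```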

Some applications also take advantage of this on purpose. The aforementioned SillyTavern client, for example, has an "impersonate" command, which intentionally sets up the context so that the agent generates (or finishes) the next human conversation step rather than the next agent conversation step.
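In terms of the earlier sketch, "impersonate" amounts to nothing more than ending the rendered prompt at the start of the user turn instead of the assistant turn. This is a conceptual sketch, not SillyTavern's actual implementation, and it reuses the assumed ChatML-style markers from above.

```python
# Sketch of the "impersonate" trick: open the prompt at the *user* turn,
# optionally seeded with whatever the human has typed so far, so the same
# completion call now drafts (or finishes) the human's next message.
def render_impersonation_prompt(messages: list[dict], partial_user_text: str = "") -> str:
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}{END_OF_TURN}\n"
        for m in messages
    ]
    parts.append(f"<|im_start|>user\n{partial_user_text}")
    return "".join(parts)
```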