Comment by MrNeon
2 years ago
> Are you going to have your user
What fucking user, man? Is it not painfully clear I never spoke in the context of deploying applications?
Your issues with this level of prefilling in the context of deployed apps ARE valid, but I have no interest in discussing that specific use case, and you really should have realized your arguments were context-dependent, not actual rebuttals to what I claimed at the start several comments ago.
Are we done?
I thought we were done when I demonstrated GPT 4 can continue a completion contrary to your belief, but here you are throwing a tantrum several comments later.
> GPT 4 can continue a completion contrary to your belief
When did I say that? I said they work differently. Claude has nothing in between the prefill and the result; OpenAI has tokens between the last assistant message and the result, and that makes it different. You cannot prefill in OpenAI. Claude's prefill is powerful because it effectively lets you use it as a general completion model, not a chat model. OpenAI does not let you do this with GPT.
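Concretely, "prefill" on the Claude side looks something like this, a minimal sketch with Anthropic's Python SDK (the model name is just an example): you put a partial assistant message last, and Claude continues it directly, with nothing inserted in between.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The final message is an assistant turn; Claude continues it directly,
# with nothing inserted between the prefill and the completion.
resp = client.messages.create(
    model="claude-2.1",  # example model name
    max_tokens=200,
    messages=[
        {"role": "user", "content": "Write a haiku about the sea."},
        {"role": "assistant", "content": "Salt wind over"},  # the prefill
    ],
)
print(resp.content[0].text)  # picks up right after "Salt wind over"
```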
a) gpt-3.5-turbo has a completion endpoint version as of June: `gpt-3.5-turbo-instruct`
b) Even the chat-tuned version does completions; if you go via Azure and use ChatML you can confirm it for yourself (rough sketch below). They trained the later checkpoints to do a better job of restarting from scratch if the output doesn't match its typical output format, to avoid red-teaming techniques.
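Rough sketch of both points using the openai Python package's v1-style client. The Azure half is the looser assumption: ChatML over the completions endpoint was a preview feature, and the endpoint, API version, and deployment name here are placeholders.

```python
from openai import OpenAI, AzureOpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# a) The instruct variant lives on the plain completions endpoint:
#    it simply continues whatever text you hand it.
resp = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Salt wind over",
    max_tokens=50,
)
print(resp.choices[0].text)

# b) Azure-style ChatML over the completions endpoint (preview-era behaviour).
#    Endpoint, API version, and deployment name below are placeholders.
azure = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_version="2023-05-15",
    api_key="...",
)

# You hand-build the turn structure and leave the assistant turn open,
# which is exactly a prefill.
chatml_prompt = (
    "<|im_start|>system\nYou are a helpful assistant.\n<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about the sea.\n<|im_end|>\n"
    "<|im_start|>assistant\nSalt wind over"  # left open: the model continues it
)
resp = azure.completions.create(
    model="gpt-35-turbo",  # your Azure deployment name (hypothetical)
    prompt=chatml_prompt,
    stop=["<|im_end|>"],
    max_tokens=50,
)
print(resp.choices[0].text)
```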
What you keep going on about is the <|im_start|> token... which is functionally identical to the `Human:` message for Anthropic.
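Strip away the API calls and the parallel is just the turn delimiters; same prefill trick, different markup (illustrative strings only):

```python
# The same prefilled assistant turn in the two prompt dialects.

# ChatML (OpenAI/Azure): special tokens mark the turns.
chatml = (
    "<|im_start|>user\nWrite a haiku about the sea.\n<|im_end|>\n"
    "<|im_start|>assistant\nSalt wind over"
)

# Anthropic's classic completion format: plain-text Human:/Assistant: labels.
claude = (
    "\n\nHuman: Write a haiku about the sea."
    "\n\nAssistant: Salt wind over"
)
```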