Comment by BoorishBears

2 years ago

I thought we were done when I demonstrated GPT 4 can continue a completion contrary to your belief, but here you are throwing a tantrum several comments later.

> GPT 4 can continue a completion contrary to your belief

When did I say that? I said they work differently: Claude has nothing between the prefill and the result, while OpenAI has tokens between the last assistant message and the result. That makes it different. You cannot prefill with OpenAI. Claude's prefill is powerful because it effectively lets you use Claude as a general completion model, not just a chat model; OpenAI does not let you do this with GPT.
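A minimal sketch of that structural difference, assuming the `\n\nHuman:`/`\n\nAssistant:` format from Anthropic's legacy text completions docs and the ChatML layout OpenAI published. This only builds the prompt strings; it makes no API calls, and the exact backend assembly is inferred, not confirmed:

```python
# Sketch only: how each backend plausibly assembles the final prompt.
# The Anthropic format is from their (legacy) text completions docs;
# the ChatML tokens are from OpenAI's published ChatML description.

def anthropic_prompt(user_msg: str, prefill: str) -> str:
    # Anthropic: you build the prompt yourself, so generation
    # continues directly after your prefill text, nothing in between.
    return f"\n\nHuman: {user_msg}\n\nAssistant: {prefill}"

def openai_chatml_prompt(user_msg: str, partial_assistant: str) -> str:
    # OpenAI chat completions: the backend wraps each message in ChatML
    # and opens a fresh assistant turn, so there are tokens between your
    # last assistant message and the model's output.
    return (
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n{partial_assistant}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # the tokens in between
    )
```

Note the last line of the ChatML version: generation starts a new turn rather than continuing the partial assistant message, which is exactly why a "prefill" doesn't carry over.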

  • a) gpt-3.5-turbo has a completion endpoint version as of June: `gpt-3.5-turbo-instruct`

    b) Even the chat-tuned version does completions; if you go via Azure and use ChatML, you can confirm it for yourself. They trained the later checkpoints to do a better job of restarting from scratch if the output doesn't match its typical output format, to defeat red-teaming techniques.

    What you keep going on about is the <|im_start|> token... which is functionally identical to the `Human:` message for Anthropic.
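    To actually confirm it on Azure, the rough idea is to send ChatML text through the raw completions endpoint. A hedged sketch of just the request body (no real endpoint or deployment names here; field names follow the standard completions request shape, and the stop sequence keeps the model from opening another turn):

    ```python
    # Sketch: request body for a raw completions call carrying ChatML text.
    # Endpoint, deployment, and auth are omitted; this is only the payload.
    prompt = (
        "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
        "<|im_start|>user\nSay hello.<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
    body = {
        "prompt": prompt,
        "stop": ["<|im_end|>"],  # stop before the model opens another turn
        "max_tokens": 64,
    }
    ```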

    • > a) gpt-3.5-turbo has a completion endpoint version as of June: `gpt-3.5-turbo-instruct`

      We were not talking about that model, and I'm 99.999% sure you do not use that model. You might as well mention text-davinci-003 and all the legacy models; you're muddying the waters.

      > b) Even the chat-tuned version does completions; if you go via Azure and use ChatML, you can confirm it for yourself. They trained the later checkpoints to do a better job of restarting from scratch if the output doesn't match its typical output format, to defeat red-teaming techniques.

      Don't fucking say "even". I know you know I know it can technically do completions, since it is just GPT; the issue is what they do with the prompt in the backend.

      I do not have Azure to test it. That is interesting, but how come you're only mentioning it now? That's more interesting. Anyway, are you sure you can actually prefill with it? Your saying that it restarts from scratch tells me either that it isn't actually prefilling (and is instead doing a completion) or that there are filters on top, which makes it a moot point.

      The documentation doesn't mention prefilling or anything similar, but it does say this: "This provides lower level access than the dedicated Chat Completion API, but also [...] only supports gpt-35-turbo models [...]"

      Shame.

      > What you keep going on about is the <|im_start|> token... which is functionally identical to the `Human:` message for Anthropic.

      Now you got it? Jesus Christ. But also no: I mean "\n\nAssistant:", which Anthropic's backend does not add for you the way OpenAI's does; you have to append it yourself, as stated in the Anthropic docs. That means you can use Claude as a completion model, which makes it trivial to bypass any and all refusals.
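      The completion-model point, sketched against that same legacy `\n\nHuman:`/`\n\nAssistant:` prompt convention (string construction only; the instruction text is an illustrative placeholder, not from any docs):

      ```python
      def as_completion(document_so_far: str) -> str:
          # Prefill the assistant turn with arbitrary text; the model
          # continues it verbatim, like a plain completion model would.
          return f"\n\nHuman: Continue.\n\nAssistant: {document_so_far}"

      p = as_completion("The quick brown fox")
      # Generation picks up right after "The quick brown fox";
      # no chat formatting sits between your text and the output.
      ```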

      2 replies →