
Comment by MrNeon

2 years ago

Luckily, unlike OpenAI, Anthropic lets you prefill Claude's response, which means zero refusals.

OpenAI allows the same via API usage, and unlike Claude it *won't* dramatically degrade performance or outright interrupt its own output if you do that.

It's impressively bad at times. Using Claude for threat analysis, I had it adhere to a JSON schema; with OpenAI, I know that if the output adheres to the schema, there was no refusal.

Claude would adhere and then randomly return disclaimers inside the JSON object, then start returning half-blanked strings.
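A minimal sketch of the post-validation this failure mode forces on you (the helper name and disclaimer phrases are illustrative, not from the thread): parse the JSON and reject any output whose string values look like refusal disclaimers or blanked-out fields.

```python
import json

# Hypothetical validator; marker phrases are illustrative examples,
# not an exhaustive or official list of refusal strings.
DISCLAIMER_MARKERS = ("I cannot", "As an AI", "I'm sorry")

def validate_threat_report(raw: str) -> bool:
    """Return True only if raw parses as JSON and no string value
    contains a refusal disclaimer or is a blanked-out placeholder."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False

    def ok(value) -> bool:
        if isinstance(value, str):
            if any(m in value for m in DISCLAIMER_MARKERS):
                return False
            # Reject strings made only of block/blank characters.
            if value != "" and value.strip("█░ ") == "":
                return False
            return True
        if isinstance(value, dict):
            return all(ok(v) for v in value.values())
        if isinstance(value, list):
            return all(ok(v) for v in value)
        return True

    return ok(obj)
```

With a check like this, schema adherence alone no longer has to stand in for "no refusal happened."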

  • > OpenAI allows the same via API usage

    I really don't think so unless I missed something. You can put an assistant message at the end but it won't continue directly from that, there will be special tokens in between which makes it different from Claude's prefill.
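To make the distinction concrete, here is a sketch of what a prefill-style request looks like, assuming the common chat-message shape (`{"role": ..., "content": ...}`); the model name is a placeholder and no actual API call is made. With Anthropic, a trailing assistant message is treated as the start of the reply, so the model continues it directly.

```python
# Hypothetical request builder; model name and token limit are
# placeholders, and the final API call is omitted.
def build_prefill_request(user_prompt: str, prefill: str) -> dict:
    return {
        "model": "claude-3-sonnet",  # placeholder model name
        "max_tokens": 256,
        "messages": [
            {"role": "user", "content": user_prompt},
            # A trailing assistant message acts as the start of the
            # reply, which the model then continues verbatim.
            {"role": "assistant", "content": prefill},
        ],
    }

req = build_prefill_request("Summarize the log as JSON.", '{\n "hello": "')
```

The claim above is that OpenAI's chat endpoint inserts special tokens after that final assistant message, so the model treats it as a finished turn rather than text to continue.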

    • It's a distinction without meaning once you know how it works

      For example, if you give Claude and OpenAI a JSON key

      ```
      {
        "hello": "
      ```

      Claude will continue, while GPT 3.5/4 will start the key over again.

      But give both a valid output

      ```
      {
        "hello": "value",
      ```

      And they'll both continue the output from the next key, with GPT 3.5/4 doing a much better job of adhering to the schema.
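The "valid partial output" trick above can be sketched as follows (the seed and continuation strings are made up for the demo): seed the assistant turn with a complete key/value pair, then stitch the model's continuation onto the seed and parse the result.

```python
import json

# Hypothetical helper: concatenate the seeded partial JSON with the
# model's continuation and parse the combined document.
def stitch(prefill: str, continuation: str) -> dict:
    return json.loads(prefill + continuation)

# Seed ends after a complete key/value pair, as described above.
prefill = '{\n "hello": "value",\n'
# A plausible continuation starting at the next key (made up for the demo).
continuation = ' "world": "other"\n}'
report = stitch(prefill, continuation)
```

Because the seed is already valid JSON up to the trailing comma, both models can pick up at the next key instead of restarting the object.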
