
Comment by BoorishBears

2 years ago

OpenAI allows the same via API usage, and unlike Claude it won't dramatically degrade performance or outright interrupt its own output if you do that.

It's impressively bad at times: using it for threat analysis, I had it adhering to a JSON schema, and with OpenAI I know that if the output adheres to the schema, there was no refusal.

Claude would adhere, then randomly return disclaimers inside the JSON object and start returning half-blanked strings.

> OpenAI allows the same via API usage

I really don't think so, unless I missed something. You can put an assistant message at the end, but the model won't continue directly from it: there will be special tokens in between, which makes it different from Claude's prefill.
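
The difference described above is structural, not in the request shape. A minimal sketch (hypothetical payloads only, no real API calls; the model names and prompt text are placeholders): both APIs accept a trailing assistant message, but one backend treats it as a literal prefill while the other wraps it in turn-boundary tokens.

```python
# Sketch: the same trailing assistant message sent to both chat APIs.
# Anthropic-style backends treat it as a prefill -- generation continues
# directly from its last character. OpenAI-style backends insert special
# tokens (turn boundaries) after it, so the model begins a fresh
# assistant turn instead of continuing mid-string.

prefill = '{\n  "hello": "'  # partial JSON the model should continue

claude_request = {
    "model": "claude-model",  # placeholder name
    "messages": [
        {"role": "user", "content": "Return the result as JSON."},
        {"role": "assistant", "content": prefill},  # true prefill
    ],
}

openai_request = {
    "model": "gpt-model",  # placeholder name
    "messages": [
        {"role": "user", "content": "Return the result as JSON."},
        {"role": "assistant", "content": prefill},  # not a true prefill
    ],
}

# The message lists are byte-identical; the behavioral difference comes
# entirely from how each backend tokenizes the trailing assistant turn.
print(claude_request["messages"] == openai_request["messages"])
```

In other words, nothing in the request distinguishes the two cases; the divergence happens at tokenization time on the server, which is why the thread's example below behaves differently per provider.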

  • It's a distinction without meaning once you know how it works

    For example, if you give Claude and OpenAI a JSON key

    ```
    {
      "hello": "
    ```

    Claude will continue, while GPT 3.5/4 will start the key over again.

    But give both a valid partial output:

    ```
    {
      "hello": "value",
    ```

    And they'll both continue the output from the next key, with GPT 3.5/4 doing a much better job of adhering to the schema.

    • > It's a distinction without meaning once you know how it works

      But I do know how it works, I even said how it works.

      The distinction is not without meaning because Claude's prefill allows bypassing all refusals while GPT's continuation does not. It is fundamentally different.
