OpenAI allows the same via API usage, and unlike Claude it *won't dramatically degrade performance or outright interrupt its own output if you do that.
It's impressively bad at times: using it for threat analysis I had it adhering to a JSON schema, and with OpenAI I know if the output adheres to the schema, there's no refusal.
Claude would adhere and then randomly return disclaimers inside of the JSON object then start returning half blanked strings.
> OpenAI allows the same via API usage
I really don't think so unless I missed something. You can put an assistant message at the end but it won't continue directly from that, there will be special tokens in between which makes it different from Claude's prefill.
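To make the difference concrete, here's a minimal sketch of a Claude-style prefill request body (the model name, prompt, and JSON key are all illustrative, not from the thread):

```python
import json

# Illustrative prefill: the partial JSON the model should continue from.
prefill = '{"severity": "'

# Claude's Messages API treats a trailing assistant message as the literal
# start of its own reply and continues directly from it.
payload = {
    "model": "claude-3-opus-20240229",  # illustrative model name
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Classify this alert. Reply as JSON."},
        {"role": "assistant", "content": prefill},  # the prefill turn
    ],
}

# OpenAI's Chat Completions API accepts the same message shape, but the
# server wraps each message in its own special tokens, so the model opens
# a fresh assistant turn instead of continuing this one.
print(json.dumps(payload, indent=2))
```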
It's a distinction without meaning once you know how it works
For example, if you give Claude and OpenAI a partial JSON key (illustrative):
```
{"severity": "
```
Claude will continue, while GPT 3.5/4 will start the key over again.
But give both a valid partial output:
```
{"severity": "high",
```
And they'll both continue the output from the next key, with GPT 3.5/4 doing a much better job of adhering to the schema.
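One practical consequence of the prefill approach: the API's response contains only the continuation, so the caller stitches the full JSON together client-side. A sketch, with an invented stand-in for the model's reply:

```python
import json

prefill = '{"severity": "'                       # sent as the trailing assistant turn
continuation = 'high", "category": "phishing"}'  # invented stand-in for the model's reply

# Claude-style prefill: full output = what you sent + what came back.
full_output = prefill + continuation
report = json.loads(full_output)
print(report["severity"])  # → high
```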
Can you give an example of how Anthropic and OpenAI differ in that?
From Anthropic's docs: https://docs.anthropic.com/claude/docs/configuring-gpt-promp...
In OpenAI's case, the "\n\nAssistant:" equivalent is added server-side, with no option to prefill the response.
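For contrast, Anthropic's older text-completions format made the seam explicit: the caller assembled the whole transcript string, so anything placed after the final "\n\nAssistant:" became the start of the model's reply. A rough sketch (the prompt text is illustrative):

```python
# Legacy Anthropic-style transcript: the caller owns the whole string,
# so text after the final "\n\nAssistant:" acts as a prefill.
prefill = ' {"severity": "'
prompt = (
    "\n\nHuman: Classify this alert. Reply as JSON."
    "\n\nAssistant:" + prefill
)

# OpenAI's chat endpoints never expose this seam: the server appends its
# own assistant-header tokens, so there is nothing to write after them.
print(prompt)
```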