Comment by orf
14 hours ago
In case you missed it:

"OpenAI does publish advice on prompting o1, but we find it incomplete, and in a sense you can view this article as a 'Missing Manual' to lived experience using o1 and o1 pro in practice."

That last sentence is important.
But extraordinary claims require extraordinary proof. OpenAI tested the model for months and concluded that simple prompts work best. The author claims that complex prompts are best, but cites no evidence.
I find it surprising that you think documentation issues are “extraordinary”.
You have read literally any documentation before, right?
I'd just love to see one of the prompt/response pairs demonstrating the technique. Here is an example from my llm-consortium tool: https://github.com/irthomasthomas/llm-consortium/blob/main/c... If you want more, you can have them in 5 minutes.
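(For anyone unfamiliar with the consortium pattern: it fans the same prompt out to several models, then has an arbiter model synthesize the answers into one response. Here is a minimal Python sketch of that idea using the openai client; the model names and the arbiter prompt are illustrative assumptions of mine, not the tool's actual code.)

    # Sketch of the consortium pattern: fan one prompt out to several
    # models, then have an arbiter model merge the answers.
    # Model names and the arbiter prompt are illustrative assumptions,
    # not llm-consortium's actual implementation.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(model: str, prompt: str) -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def consortium(prompt: str, members: list[str], arbiter: str) -> str:
        # Collect one answer per member model, tagged by model name.
        answers = [f"[{m}]\n{ask(m, prompt)}" for m in members]
        # Ask the arbiter to reconcile them into a single response.
        synthesis = (
            "Several models answered the same question. "
            "Synthesize the best single answer.\n\n"
            f"Question: {prompt}\n\n" + "\n\n".join(answers)
        )
        return ask(arbiter, synthesis)

    print(consortium("How many r's are in 'strawberry'?",
                     members=["gpt-4o-mini", "gpt-4o"],
                     arbiter="gpt-4o"))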
I mean, OpenAI not only tested the model, they literally trained it. Training a model involves developing evaluations for it; it's a gargantuan effort. I'm fairly certain that OpenAI is the authority on how to prompt o1.