
Comment by jmcdonald-ut

16 hours ago

The article links out to OpenAI's advice on prompting, but it also claims:

    OpenAI does publish advice on prompting o1, 
    but we find it incomplete, and in a sense you can
    view this article as a “Missing Manual” to lived
    experience using o1 and o1 pro in practice.

To that end, the article does seem to contradict some of the advice OpenAI gives. E.g., the article recommends stuffing the model with as much context as possible, while OpenAI's docs advise including only the most relevant information to prevent the model from overcomplicating its response.

I haven't used o1 enough to have my own opinion.

Those are contradictory. OpenAI claims that you don't need a manual, since o1 performs best with simple prompts. The author claims it performs better with more complex prompts, but provides no evidence.

  • In case you missed it

        OpenAI does publish advice on prompting o1, 
        but we find it incomplete, and in a sense you can
        view this article as a “Missing Manual” to lived
        experience using o1 and o1 pro in practice.
    
    

    The last line is important

    • But extraordinary claims require extraordinary proof. OpenAI tested the model for months and concluded that simple prompts are best. The author claims that complex prompts are best, but cites no evidence.
