Comment by saaaaaam

23 days ago

No, but I mean that in Claude you don’t put the contract linearly into the chat. In other words, you can’t position it before or after the prompt; you attach it at the top of the chat. Are you saying you would prompt with “please examine the contract I will provide in the next message, here is what I want you to do <instruction>”?

The LLM developers already know this trick, so I expect that if you attach documents, they are processed after your prompt.
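
If you call the API directly instead of using the chat UI, you can control the ordering yourself. A minimal sketch with the Anthropic Python SDK; the model name, file path, and instruction are placeholders, not something from this thread:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

contract_text = open("contract.txt").read()  # placeholder document
instruction = "Summarise the termination clauses in this contract."  # placeholder task

# Document first, instruction last; swap the two parts to test the other ordering.
prompt = f"<document>\n{contract_text}\n</document>\n\n{instruction}"

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model alias
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```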

There is a further trick that is probably already integrated: simply giving the same input twice greatly improves model performance.

  • Gotcha, thanks for explaining. It’s interesting because there are times I say “look at this document and do this” but forget to attach the doc. I’ve always had the sense Claude is better “prepped” when it anticipates the document coming. Sometimes I’ve said “in the next message I’m going to give you a document, here is what I want you to do; if there’s anything you’re unclear on, ask me questions before we begin”. This seems to bring better results, but I’ve not done any sort of robust test.