Comment by saaaaaam
11 days ago
> Model performance has also been shown to be better if you lead with the question. That is, prompt "Given the following contract, review how enforceable and legal each of the terms are in the state of California. <contract>", not "<contract> How enforceable...".
Confused here. You attach the contract, so it’s not a case of leading with the question. The contract is presented in the chat, then you ask the question.
LLMs are necessarily linear. If you paste the contract first, the model's attention mechanism can still process the contract, but only generically: it picks out the key points of the contract without knowing what matters to you. If you ask the question first, the attention mechanism is already primed, so it reads the contract paying more attention to the parts that are relevant to the question.
If I ask you to read Moby Dick and then ask you to critique the author's use of weather as a setting, that's a bit more difficult than if I ask you to critique that aspect before asking you to read the book.
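If it helps, here's a rough sketch of the two orderings being described. The contract text, question wording, and tags are just placeholders, nothing Claude-specific:

```python
# Minimal sketch of question-first vs. contract-first prompt ordering.
# All text below is placeholder content for illustration.

contract = "<contract text would go here>"
question = (
    "Given the following contract, review how enforceable and legal "
    "each of the terms is in the state of California."
)

# Question-first: the model reads the contract already knowing what to look for.
question_first = f"{question}\n\n<contract>\n{contract}\n</contract>"

# Contract-first: the model processes the contract generically,
# then only afterwards learns what was actually relevant.
contract_first = f"<contract>\n{contract}\n</contract>\n\n{question}"

print(question_first)
print("---")
print(contract_first)
```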
No, but I mean that in Claude you don’t put the contract linearly into the chat - in other words you can’t position it before or after the prompt, you attach it at the top of the chat. Are you saying you would prompt something like “please examine the contract I will provide in the next message, here is what I want you to do <instruction>”?
The LLM developers already know this trick, so I expect that if you attach documents, they are processed after your prompt.
There is a further trick that is probably already integrated: simply giving the same input twice greatly improves model performance.
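Assuming you build the prompt yourself rather than relying on attachments, one way the "same input twice" idea could look in practice (again, all names and text are placeholders):

```python
# Sketch of repeating the document so the second copy is read
# with the question already in context. Placeholder text only.

contract = "<contract text would go here>"
question = "Review how enforceable each term is under California law."

prompt = (
    f"<contract>\n{contract}\n</contract>\n\n"
    f"{question}\n\n"
    f"Here is the contract again:\n"
    f"<contract>\n{contract}\n</contract>"
)

print(prompt)
```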