6Az4Mj4D · 7 months ago
As I was reading that prompt, it looked like a large blob of if/else case statements.
refactor_master · 7 months ago
Maybe we can train a simpler model to come up with the correct if/else statements for the prompt. Like a tug boat.
otabdeveloper4 · 7 months ago
Hobbyists (random dudes who use LLMs to roleplay locally) have already figured out how to "soft-prompt". This is when you use ML to optimize an embedding vector to serve as your system prompt instead of guessing and writing it out by hand like a caveman. Don't know why the big cloud LLM providers don't do this.
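The soft-prompting idea above (often called "prompt tuning") can be sketched with a toy frozen model. This is a minimal NumPy illustration under stated assumptions, not a real transformer: the linear "model", the dimensions, the learning rate, and the target token are all made up for demonstration. The one real idea it shows is that the model weights stay frozen while gradient descent optimizes only the virtual-token embeddings prepended to the input.

```python
import numpy as np

# Toy stand-in for a frozen LLM: a fixed linear map from the mean of the
# input embeddings to a logit vector. (Illustrative only; real soft-prompting
# optimizes virtual-token embeddings through a frozen transformer.)
rng = np.random.default_rng(0)
EMB_DIM, VOCAB = 16, 8
W = rng.normal(size=(VOCAB, EMB_DIM))        # frozen "model" weights

def forward(embs, target):
    logits = W @ embs.mean(axis=0)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[target]), p             # cross-entropy loss, softmax

user_embs = rng.normal(size=(4, EMB_DIM))    # frozen "user input" tokens
soft_prompt = rng.normal(size=(2, EMB_DIM))  # 2 trainable virtual tokens
target = 3                                   # desired output token id

start_loss, _ = forward(np.vstack([soft_prompt, user_embs]), target)
for _ in range(1000):
    embs = np.vstack([soft_prompt, user_embs])
    _, p = forward(embs, target)
    # Analytic gradient: dL/d(mean emb) = W^T (p - onehot); each row of
    # `embs` contributes 1/n to the mean, and only soft-prompt rows train.
    g_mean = W.T @ (p - np.eye(VOCAB)[target])
    soft_prompt -= 0.2 * g_mean / len(embs)

final_loss, p = forward(np.vstack([soft_prompt, user_embs]), target)
```

In a real setup the gradient flows through the frozen network via autodiff rather than this hand-derived linear case, but the trainable/frozen split is the same.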
MaxLeiter · 7 months ago
This is generally how prompt engineering works:
1. Start with a prompt
2. Find some issues
3. Prompt against those issues*
4. Condense into a new prompt
5. Go back to (1)
* ideally add some evals too
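The loop above can be sketched in a few lines. Everything here is a stand-in assumption: the `llm` function is a stub in place of a real model API so the example runs, the eval cases are toy substring checks, and "prompt against the issues" plus "condense" are reduced to adding a rule and deduplicating lines.

```python
def llm(system, user):
    # Stub model: obeys an "uppercase" rule if the system prompt has one.
    # A real implementation would call your provider's API instead.
    reply = f"echo: {user}"
    return reply.upper() if "uppercase" in system.lower() else reply

# Step 1's companion: a small eval set of input -> expected-substring cases.
EVALS = [
    ("hello", "ECHO: HELLO"),
    ("bye", "ECHO: BYE"),
]

def run_evals(prompt):
    # Step 2: find issues by collecting the cases the prompt fails.
    return [(u, exp) for u, exp in EVALS if exp not in llm(prompt, u)]

def refine(prompt, rounds=3):
    for _ in range(rounds):                  # step 5: go back to (1)
        failures = run_evals(prompt)
        if not failures:
            break
        # Steps 3-4: prompt against the failures, then "condense"
        # (here: add a rule covering them and deduplicate the lines).
        rules = set(prompt.split("\n")) | {"Always reply in uppercase."}
        prompt = "\n".join(sorted(rules))
    return prompt

final = refine("You are an echo bot.")
print(run_evals(final))   # → [] once the uppercase rule is added
```

The point of the evals is that step 4's condensed prompt can silently regress earlier fixes; rerunning the same cases each round catches that.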