Comment by otabdeveloper4
7 months ago
Hobbyists (random dudes who run LLMs locally to roleplay) have already figured out how to "soft-prompt".
This is when you use gradient descent to optimize an embedding vector to serve as your system prompt, instead of guessing and writing it out by hand like a caveman.
Don't know why the big cloud LLM providers don't do this.
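The idea can be sketched in a toy form. Below is a minimal NumPy illustration (my own example, not from the comment): the "frozen model" is just a fixed linear map, and the soft prompt is the only trainable vector, optimized by gradient descent. Real soft-prompting (e.g., prompt tuning) does the same thing at scale, backpropagating a language-model loss through a frozen transformer into learnable prompt embeddings.

```python
import numpy as np

# Toy soft-prompting: treat the prompt as a learnable embedding vector
# and optimize it by gradient descent against a frozen model.
# Here the frozen "model" is a fixed random linear map W; in the real
# setting it would be a frozen transformer and a language-model loss.

rng = np.random.default_rng(0)
dim = 16

W = rng.normal(size=(dim, dim)) / np.sqrt(dim)  # frozen weights (never updated)
target = rng.normal(size=dim)                   # desired model output

prompt = np.zeros(dim)  # the soft prompt: the ONLY trainable parameter
lr = 0.05

def loss(p):
    err = W @ p - target
    return float(err @ err)

initial = loss(prompt)
for _ in range(500):
    grad = 2 * W.T @ (W @ prompt - target)  # d(loss)/d(prompt)
    prompt -= lr * grad
final = loss(prompt)

print(f"loss: {initial:.3f} -> {final:.3f}")  # loss drops as the prompt is tuned
```

The point of the analogy: nothing about the frozen model changes; all the "prompt engineering" happens in a continuous embedding space where the optimizer, not a human, picks the prompt.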