Comment by scotty79
7 months ago
I think prompt tuning might be worth doing for specific tasks in agentic workflows. For general prompts, using words instead of fine-tuned input vectors is probably good enough. It's also easier to update.
The fact that the model leaks some wordy prompt doesn't mean its actual prompt isn't fine-tuned embeddings. It would have no way to leak those using just output tokens, and since you start fine-tuning from a text prompt, it would most likely return that text or something close to it.
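To make the distinction concrete: prompt tuning optimizes continuous input vectors while the model's weights stay frozen, so the "prompt" is a set of embeddings rather than tokens. A minimal numpy sketch of that idea, using a toy frozen linear "model" (all names and dimensions here are illustrative, not a real library API):

```python
import numpy as np

# Toy sketch of prompt tuning: a frozen "model" (a fixed linear map)
# whose behavior we steer by optimizing a learnable soft-prompt vector.
# Real prompt tuning prepends learned vectors to the token embedding
# sequence of a transformer; here we just add them to one input vector.

rng = np.random.default_rng(0)
d = 8                           # embedding dimension (illustrative)
W = rng.normal(size=(d, d))     # frozen model weights, never updated

def model(prompt_vec, input_vec):
    return W @ (prompt_vec + input_vec)

x = rng.normal(size=d)          # a fixed "input embedding"
target = np.ones(d)             # desired model output

p = np.zeros(d)                 # the soft prompt: the ONLY trainable parameters
init_loss = 0.5 * np.sum((model(p, x) - target) ** 2)

lr = 0.01
for _ in range(500):
    err = model(p, x) - target
    grad = W.T @ err            # gradient of 0.5 * ||err||^2 w.r.t. p
    p -= lr * grad              # update the prompt, not the model

final_loss = 0.5 * np.sum((model(p, x) - target) ** 2)
print(init_loss, final_loss)    # loss drops even though W is untouched
```

The point relevant to leaking: after training, `p` is just a vector of floats. There is no token sequence to recite, so the model can only ever emit nearby text it was initialized from, not the embeddings themselves.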