Comment by monkeynotes
2 years ago
What are you getting at when you say "secretive" injections? Isn't this stuff basically how any AI business shapes its public GPTs? I don't even know what an LLM looks like without the primary prompts tuning its attitude and biases. Can you run a GPT responsibly without giving it some discretionary direction? And as for being secretive, isn't that reasonable for a corporation? How they tune their LLM is surely valuable IP.
And this is just addressing corporations; people running their own LLMs are the bigger problem. They have zero accountability and almost the same tools as the big players.
I must be misunderstanding what these prompts are used for.