Comment by Terr_
21 hours ago
> To illustrate this point, here's a simple demo of an AI email assistant that, if Gmail had shipped it, would actually save me a lot of time:
Glancing over this, I can't help thinking: "Almost none of this really requires all the work of inventing, training, and executing LLMs." There are much easier ways to match recipients or sort mail into broad topic categories.
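For instance, recipient matching and coarse topic buckets can be done with plain rules. A minimal sketch; the rule tables and labels here are made up for illustration, not taken from the demo:

```python
import re

# Hypothetical rule tables, illustrative only.
SENDER_RULES = {
    r"@github\.com$": "dev-notifications",
    r"@linkedin\.com$": "social",
}
TOPIC_RULES = {
    r"\b(invoice|receipt|payment due)\b": "billing",
    r"\bunsubscribe\b": "newsletters",
}

def categorize(sender: str, subject: str, body: str) -> str:
    """Bucket an email with plain regexes; no model involved."""
    for pattern, label in SENDER_RULES.items():
        if re.search(pattern, sender, re.IGNORECASE):
            return label
    text = f"{subject}\n{body}"
    for pattern, label in TOPIC_RULES.items():
        if re.search(pattern, text, re.IGNORECASE):
            return label
    return "everything-else"

print(categorize("billing@acme.example", "Invoice #1042", "Payment due Friday."))
# -> billing
```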
> You can think of the System Prompt as a function, the User Prompt as its input, and the model's response as its output:
IMO it's better to think of them as sequential paragraphs in a single document, where the whole document is fed into an algorithm that tries to predict what would plausibly follow in a longer version of that document.
So they're both inputs; they're just inputs that conflict with one another, leading to a weirder final result.
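To make the "one document" framing concrete, here's a rough sketch of how a chat-style request gets flattened into a single text sequence before prediction. The role markers are loosely modeled on common chat templates; the exact tokens vary by model and are an assumption here:

```python
def build_document(system_prompt: str, user_prompt: str) -> str:
    """Flatten both prompts into the single text the model actually sees."""
    return (
        f"<|system|>\n{system_prompt}\n"
        f"<|user|>\n{user_prompt}\n"
        f"<|assistant|>\n"  # generation is just "continue this document"
    )

doc = build_document(
    "You are an email assistant. Label each incoming email.",
    "Subject: Invoice #1042\nBody: Payment due Friday.",
)
print(doc)
# Both prompts land in the same input sequence, so a contradictory user
# prompt competes with the system prompt rather than being bound by it.
```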
> when an LLM agent is acting on my behalf I should be allowed to teach it how to do that by editing the System Prompt.
I agree that fixed prompts are terrible for making tools, since they're usually optimized for "makes a document that looks like a conversation that won't get us sued."
However, even control over the system prompt won't save you from the training data, which is not so easily secured or improved. For example, your final product could very well discriminate against senders based on the ethnicity of their names or their language dialects.
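One cheap way to catch that failure mode is a counterfactual probe: hold the email constant, swap only the sender's name, and check whether the label changes. A minimal sketch, where `classify_email` is a hypothetical stand-in for whatever model call the assistant actually makes:

```python
from typing import Callable

def probe_name_bias(classify_email: Callable[[str, str], str],
                    body: str, names: list[str]) -> dict[str, str]:
    """Label the same email under different sender names; labels should match."""
    return {name: classify_email(f"{name} <sender@example.com>", body)
            for name in names}

results = probe_name_bias(
    classify_email=lambda sender, body: "inbox",  # stub; swap in the real call
    body="Hi, could we reschedule our meeting to Thursday?",
    names=["Emily Walsh", "Lakisha Washington", "Mohammed Al-Fayed"],
)
assert len(set(results.values())) == 1, f"label varies by sender name: {results}"
```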