Comment by idle_zealot
20 days ago
It means the agent should try to be intentional, I think? The way ideas are phrased in prompts changes how LLMs respond, and equating the instructions to life itself might make the model stick to them better?
Reply, 20 days ago
I feel like you’re trying to assign meaning where none exists. This is why AI psychosis is a thing - LLMs are good at making you feel like they’re saying something profound when there really isn’t anything behind the curtain. It’s a language model, not a life form.