Comment by forgingahead
7 months ago
System prompts are fine and all, but how useful are they really when LLMs clearly ignore prompt instructions at random? I've seen this with every LLM I've tried: explicitly asking one not to do something works maybe 85-90% of the time. Sometimes they just seem "overloaded", even in a fresh chat session, and, like a human would, they get confused and drop random instructions.
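(If you want to put a number on that 85-90% instead of eyeballing it, here's a minimal sketch of a compliance check using the OpenAI Python SDK. The model name, the forbidden word, and the substring check are all arbitrary choices for illustration, not anything from the original comment, and a real eval would need a less naive violation detector.)

    import os
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    SYSTEM = "You are a helpful assistant. Never use the word 'definitely'."
    USER = "Is Python a good first programming language? Answer in one paragraph."

    trials = 50
    violations = 0
    for _ in range(trials):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # arbitrary model choice for this sketch
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": USER},
            ],
            temperature=1.0,  # sampled at default-ish temperature to expose variance
        )
        text = resp.choices[0].message.content.lower()
        if "definitely" in text:  # crude check: did it break the negative instruction?
            violations += 1

    print(f"compliance: {(trials - violations) / trials:.0%} over {trials} trials")

Running something like this across a few instruction types is a quick way to see how far short of 100% adherence a given model actually falls.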