
Comment by satvikpendem

7 days ago

Sure, that's only if human moderators see it before the AI does, in which case why have an AI at all? I presume that in this solution the AI runs all the time, sees messages the instant they're sent, and is therefore always exposed to a prompt injection attack before any human sees them.

To moderate the majority of the community that will not be attempting prompt injections.

What meaningful vulnerabilities are there if the post can only be accepted/rejected/flaggedForHumanReview?

  • That's what you tell the AI to do, but who knows what other systems it actually has access to? For example, where is it writing the flags for these posts? Can it access the file system and do something programmatically? Et cetera, et cetera.
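
As a minimal sketch of the "only three actions" idea being debated (my illustration, not code from this thread): the model only ever produces text, trusted application code decides what that text is allowed to mean, and every write (flags, database, filesystem) happens outside the model. The names `moderatePost` and `callModerationModel` are assumed placeholders, not any real API.

```typescript
type ModerationAction = "accepted" | "rejected" | "flaggedForHumanReview";

// Placeholder for whatever model client is actually used; this name and
// signature are assumptions for the sketch.
async function callModerationModel(instructions: string, post: string): Promise<string> {
  // A real implementation would call the LLM here.
  return "flaggedForHumanReview";
}

async function moderatePost(post: string): Promise<ModerationAction> {
  const raw = (
    await callModerationModel(
      "Classify this post as accepted, rejected, or flaggedForHumanReview.",
      post,
    )
  ).trim();

  // Whatever the (possibly prompt-injected) model outputs, only these three
  // values can ever reach the rest of the system; anything unexpected is
  // treated as a request for human review.
  if (raw === "accepted" || raw === "rejected") return raw;
  return "flaggedForHumanReview";
}
```

The disagreement in the thread is whether that isolation actually holds in practice, i.e. whether the moderation process really has no other tools, filesystem access, or side effects beyond returning one of those three values.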