solumunus 2 days ago

And do you think the severity of the issue is anywhere near the same?

zelphirkalt 2 days ago

I think that remains to be seen. Wasn't there a paper linked here on HN recently that claimed even a few examples are sufficient to poison LLMs? (I didn't read that paper; I'm only going by the title.)

solumunus 12 hours ago

I don't think it remains to be seen. I think it's obvious that the completely explicit exploit is going to be more effective.