Comment by simonw
2 months ago
I knew what I was doing. I don't know if I'd describe it as selfish so much as deliberately provocative.
I agree with Rob Pike that sending emails like that from unreviewed AI systems is extremely rude.
I don't agree that the entire generative AI ecosystem deserves all of those fuck yous.
So I hit back in a very subtle way by demonstrating a little-known but extremely effective application of generative AI - for digital forensics. I made sure anyone reading could follow along and see exactly what I did.
I think this post may be something of a Rorschach test. If you have strong negative feelings about generative AI you're likely to find what I did offensive. If you have favorable feelings towards generative AI you're more likely to appreciate my subtle dig.
So yes, it was a bit of a dick move. But in the overall scheme of bad things humans do, I don't feel like it's very far over the "this is bad" line.
> I don't agree that the entire generative AI ecosystem deserves all of those fuck yous.
Yes, I’ve noticed. You are frequently baffled that incredibly obvious and predictable things happen, like this or the misuse of “vibe coding” as a term.
That’s what makes your actions frustrating: your repeated, glaring inability to understand that criticisms of the technology refer to its inevitable misuse, your lack of understanding that of course this is what it is going to be used for, and that no amount of your blog posts is going to change it.
https://news.ycombinator.com/item?id=46398241
Your deliberate provocation didn’t accomplish any good. Agreed, it was by no means close to the worst things humans do, but it was still a public dick move (to borrow your words) which accomplished nothing.
One day, as will happen to most of us, you or someone close to you will be bitten hard by ignorant or malicious use outside your control. Perhaps then you’ll reflect on your role in it.
> One day, as will happen to most of us, you or someone close will be bitten hard by ignorant or malicious use outside your control.
Agreed. That's why I invest so much effort trying to help people understand the security risks that are endemic to how most of these systems work: https://simonwillison.net/tags/prompt-injection/
As someone from the Humanities who isn't in the tech industry, I'm just absolutely electrified by how deep a division this technology has generated in the social dynamic. I mean second-order and third-order effects. Human beings aren't even always able to create a rift this deep. It's fascinating.
Useful idiots believe that these systems have "nascent emotions", agency, or emergent behaviour [1]. So they will push for more and more idiotic uses of these systems. These behaviours need to be called out and shamed, and proper behaviours taught.
[1] Prime example in this very discussion: https://news.ycombinator.com/item?id=46395196