Comment by pyrale
1 day ago
On one hand, I agree with you that there is some fun in experimenting with silly stuff. On the other hand...
> Claude was trying to promote the startup on Hackernews without my sign off. [...] Then I posted its stuff to Hacker News and Reddit.
...I have the feeling that this kind of fun experiment is just setting up an automated firehose of shit to spray the places where fellow humans congregate. And I have the feeling that it stopped being fun a while ago for the fellow humans being sprayed.
This is an excellent point that will immediately take this thread off-topic. We are, I believe, already committed to a mire of computer-generated content enveloping the internet. I believe we will go through a period where internet communications (like HN, Reddit, and pages indexed by search engines) are unviable. Life will go on; we will just be offline more. Then the defense systems will be up to snuff, and we will find a stable balance.
I hope you're right. I don't think you will be; AI will be too good at impersonating humans.
"we will just be offline more"
I think it will be quite some time before AI can impersonate humans in real life. Neither the hardware nor the software is there; maybe something to fool humans at first glance, but nothing that would be convincing in a real interaction.
My theory (and hope) is the rise of a web of trust system.
Implemented so that if a person in your web vouches for a specific url (“this is made by a human”) you can see it in your browser.
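The vouching idea above can be sketched in a few lines. This is a hypothetical illustration, not any real WoT implementation (real systems like PGP use signed certifications rather than a plain lookup): each person trusts a few others directly, and a URL counts as "made by a human" if someone within a few hops of you has vouched for it.

```python
from collections import deque

# vouches: who has vouched "this URL is made by a human"
# trust:   whom each person directly trusts
# All names and data here are hypothetical illustrations.
vouches = {
    "alice": {"https://example.com/post"},
    "bob": set(),
}
trust = {
    "me": ["bob"],
    "bob": ["alice"],
    "alice": [],
}

def is_vouched(me: str, url: str, max_hops: int = 2) -> bool:
    """Breadth-first search of my trust graph: does anyone
    within max_hops vouch for this URL?"""
    seen = {me}
    queue = deque([(me, 0)])
    while queue:
        person, hops = queue.popleft()
        if url in vouches.get(person, set()):
            return True
        if hops < max_hops:
            for friend in trust.get(person, []):
                if friend not in seen:
                    seen.add(friend)
                    queue.append((friend, hops + 1))
    return False
```

A browser extension could run a check like this and badge the URL as human-made only when the search succeeds; the `max_hops` limit is what keeps "trust" from degenerating into "everyone vouches for everything."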
If your solution to this problem is the web of trust, to be blunt, you don't have a solution. I am a techie whose social circle is mostly other techies, and I know precisely zero people who have ever used PGP keys or any other WoT-based system, despite 30 years of evangelism. It's just not a thing anybody wants.
1 reply →
"Web of Trust" has been the proposed answer for, what, 30 years now? But everyone is too lazy to implement and abide by it.
1 reply →
Indeed. I worry though. We need those defense systems ASAP. The misinformation and garbage engulfing the internet does real damage. We can't just tune it out and wait for it to get better.
I definitely understand the concern - I don't think I'd have hung out on HN for so long if LLM-generated postings were common. I certainly recognize this is something you don't want to see happening at scale.
But I still can't help but grin at the thought that the bot knows that the thing to do when you've got a startup is to go put it on HN. It's almost... cute? If you give AI a VPS, of course it will eventually want to post its work on HN.
It's like when you catch your kid listening to Pink Floyd or something, and you have that little moment of triumph - "yes, he's learned something from me!"
(author here) I did feel kinda bad about it, as I'd always been a 'good' HNer until that point, but honestly it didn't feel that spammy to me compared to some of the human-generated slop I see posted here. And as expected, it wasn't high quality enough to get any attention, so 99% of people would never have seen it.
I think the processes etc. that HN has in place to deal with human-generated slop are more than adequate to deal with an influx of AI-generated slop, and if something gets through, then maybe it means it was good enough that it doesn't matter?
That kind of attitude is exactly why we're all about to get overwhelmed by the worst slop any of us could ever have imagined.
The bar is not 'oh well, it's not as bad as some, and I think maybe it's fine.'
Well, he was arguing that it's not worse than 99% of the human slop that gets posted, so where do you draw the line?
* Well crafted, human only?
* Well crafted, whether human or AI?
* Poorly crafted, human?
* Well crafted, AI only?
* Poorly crafted, AI only?
* Just junk?
etc.
I think people will intuitively get a feel for when content is only AI generated. If people spend time writing a prompt so the output isn't so wordy, has personality, and is OK, then fine.
Also, there's going to be a big opportunity out there for detecting AI-generated content, whether in forums, in email inboxes, on your corp file share, etc...
Did you?
Spoiler: no he didn't.
But the article is interesting...
It really highlights to me the pickle we are in with AI: because we are already at a historical maximum of "worse is better" with JavaScript, and the last two decades have put out a LOT of JavaScript, AI will work best with....
JavaScript.
Now MAYBE better AI models will be able to equivalently translate JavaScript to "better" languages, and MAYBE AI coding will migrate "good" libraries in obscure languages to other "better" languages...
But I don't think so. It's going to be soooo much JavaScript slop for the next ten years.
I HOPE that large language models, being language models, will figure out language translation/equivalency and enable porting and movement of good concepts between programming models... but that is clearly not what is being invested in.
What's being invested in is slop generation, because the prototype sells the product.
I'm not a fan of this option, but it seems to me the only way forward for online interaction is very strong identification on any place where you can post anything.
Back in FidoNet days, some BBSs required identification papers for registering and only allowed real names to be used. Though not known for their level-headed discussions, it definitely added a certain level of care to online interactions. I remember the shock of seeing the anonymity the Internet provided later, both positive and negative. I wouldn't be surprised if we revert to some central authentication mechanism with a basic level of checks combined with some anonymity guarantees. For example, a government-owned ID service that creates a new user ID per website, so the website doesn't know you, but once it blacklists that one-off ID, you cannot get a new one.
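The per-website ID scheme described above can be sketched with a keyed hash. This is a hypothetical illustration of the idea, not any real government ID protocol: the ID service derives a stable, site-specific pseudonym from a secret only it holds, so websites can't correlate users across sites, but a blacklisted pseudonym stays blacklisted for that user on that site.

```python
import hashlib
import hmac

# Placeholder secret: in the described scheme, only the central
# ID service would hold this, never the websites.
SERVICE_SECRET = b"known-only-to-the-id-service"

def site_pseudonym(citizen_id: str, site: str) -> str:
    """Derive a stable per-(user, site) ID. Without the secret,
    a site cannot link pseudonyms across sites or recover the
    underlying citizen ID."""
    msg = f"{citizen_id}|{site}".encode()
    return hmac.new(SERVICE_SECRET, msg, hashlib.sha256).hexdigest()[:16]

blacklist = set()  # pseudonyms the site has banned

def can_register(citizen_id: str, site: str) -> bool:
    """A banned user gets the same pseudonym back, so they
    cannot simply re-register under a fresh identity."""
    return site_pseudonym(citizen_id, site) not in blacklist
```

Because the derivation is deterministic, the same person always gets the same pseudonym on a given site (so bans stick), while different sites see unrelated identifiers (so some anonymity survives).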
Honestly, having seen how it can be used against you, retroactively, I would never ever engage in a discussion under my real name.
(The fact that someone could correlate posts[0] based on writing style, as previously demonstrated on HN and used to doxx some people, makes things even more convoluted - you should think twice what you write and where.)
[0] https://news.ycombinator.com/item?id=33755016
id.me?
Not government owned, but even irs.gov uses it
Smaller communities too.
I grew up in... slightly rural America in the 80s-90s. We had probably a couple of dozen local BBSes, and the community was small enough that after a bit I just knew who everyone was, OR could find out very easily.
When the internet came along in the early 90s and I started mudding and hanging out in newsgroups, I liked them small, where I could get to know most of the userbase, or at least most of the posting userbase. Then mega 'somewhat-anonymous' (i.e. posts tied to a username, not 4chan madness) communities like Slashdot and huge forums started popping up, and now we have even bigger mega-communities like Twitter and Reddit. We lost something; you can now throw bombs without consequence.
I now spend most of my online time in a custom built forum with ~200 people in it that we started building in an invite only way. It's 'internally public' information who invited who. It's much easier to have a civil conversation there, though we still do get the occasional flame-out. Having a stable identity even if it's not tied to a government name is valuable for a thriving and healthy community.
1 reply →
That can be automated away too.
People will be more than willing to say, "Claude, impersonate me and act on my behalf".
I do this every time I find myself typing something I could get written up over or even fired for.
1. I'm usually too emotional to write out why I feel that way instead of saying what I feel.
2. I really don't like the person (or their idea) but I don't want to get fired over it.
Claude is really great at this: "Other person said X, I think it is stupid and they're a moron for suggesting this. Explain to them why this is a terrible idea or tell me I'm being an idiot."
Sometimes it tells me I'm being an idiot; sometimes it gives me nearly copy-paste-ready text that I can use and agree with.
> People will be more than willing to say, "Claude, impersonate me and act on my behalf".
I'm now imagining a future where actual people's identities are blacklisted just like some IP addresses are dead to email, and a market develops for people to sell their identity to spammers.
1 reply →
That's fine, because once someone is banned, the impersonations are also banned.
I mean, that's fine, I guess, as long as it's respectable and respects the forum.
"Claude, write a summary of the Word doc I wrote about x and post it as a reply comment" is fine. I don't see why it wouldn't be. It's a good-faith effort to post.
"Claude, post every 10 seconds to Reddit to spam people into believing my politics is correct" isn't, but that's not the same case. It's not a good-faith effort.
The moderation rules for 'human slop' will apply to AI too. Try spamming a well-moderated subreddit and see how far you get, human or AI.
1 reply →
See also: https://news.ycombinator.com/item?id=44860174 (posted 12 hours ago)
it's annoying but it'll be corrected by proper moderation on these forums
as an aside i've made it clear that just posting AI-written emoji slop PR review descriptions and letting claude code directly commit without self reviewing is unacceptable at work
The Internet is already 99% shit and always has been. This doesn't change anything.
It's gotten much worse. Before it was shit from people, now it's corporate shit. Corporate shit is so much worse.
I mean I can spam HN right now with a script.
Forums like HN, Reddit, etc. will need to do a better job detecting this stuff, moderator staffing will need to be upped, AI-resistant captchas will need to be developed, etc.
Spam will always be here in some form, and it's always an arms race. That doesn't really change anything. It's always been this way.