Comment by dluan
5 hours ago
We have a massive poisoning-of-the-commons catastrophe coming, driven by further authoritarian government overreach and control. I've seen no one working on this, and in fact most people on HN seem to be working on ways to further exacerbate this problem. I don't just mean half-solutions like Tor or social protocols that let you in and out of walled gardens.
There's still a tiny window of opportunity for engineers to design technical safeguards, but eventually this problem will move past the realm of what's easily solvable, out of our hands and into policy makers' hands. A big part of me feels like that window has already slammed shut.
It feels like "Autonomous Coding Agents" are being astroturfed on the daily on HN. The same arguments and tropes are echoing through every thread.
It's hard to distinguish who's a bot, who's a narrative pusher and who's an enthusiast. Which is exactly what you'd want from an astroturfing campaign. There's a clear benefit: people in the industry are reading this, and in doing so they're granting mindshare.
There's one way to prevent inauthentic support campaigns: personal key signatures. But judging by how afraid people, especially in the US, need to be of their government surveilling them, this isn't going to catch on.
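For what it's worth, "personal key signature" here just means a detached signature over each post. A minimal sketch, assuming the third-party `cryptography` package; the keypair and post text are invented for illustration:

```python
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical: each commenter holds a long-lived keypair and publishes
# the public half; every post ships with a detached signature.
key = Ed25519PrivateKey.generate()
post = b"Autonomous coding agents are overhyped."

sig = key.sign(post)  # 64-byte detached Ed25519 signature

# Anyone holding the public key can check the post came from the key's
# holder; verify() raises InvalidSignature if the post was tampered with.
key.public_key().verify(sig, post)
```

The catch the commenter is pointing at: a long-lived public key is also a perfect surveillance handle, which is exactly why adoption is hard.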
What's interesting is that there are indeed a lot of people pushing the 'autonomous coding agents are great' narrative, but one crucial bit is missing: they absolutely never show their code.
>It feels like "Autonomous Coding Agents" are being astroturfed on the daily on HN. The same arguments and tropes are echoing through every thread.
Isn't this exactly what you'd expect in a connected world? The best arguments from both sides proliferate, which is exactly why "the same arguments and tropes are echoing through every thread".
> Isn't this exactly what you'd expect in a connected world?
I would expect a figurative war for human attention. With so much information being available, everyone would try to make people focus on what they want to communicate.
> The best arguments
Some of these tropes and arguments aren't really the best. There are a lot of rhetorical gotchas, e.g. "that's exactly what I'd expect from a human" when an automated solution isn't up to par.
> from both sides
The only real "side" is the one actively pushing for something. Everyone else isn't a camp - they're just random people.
Yes. I’ve also been asking every engineer I know what they’re doing with AI and there’s a lot of people doing a lot of different things, but it’s a deep mismatch with the online rhetoric.
This phenomenon appears to be incrementally coming for every single topic and public platform.
There's a lot of money wrapped up in people thinking a certain way: AI is useful. Work should be done in a corporate office. The American Dream is attainable. Recession is not coming. War is good. The world is dangerous. Others want to harm you. Lots of investment in astroturfing these themes because a population who believes them will more easily be separated from their money.
I feel the same way. Most people I've talked to are using AI for better search. I don't know anyone using it heavily to do their main job (writing code). I think a lot of the accounts bragging about how much they are doing with AI are bots.
It feels the same way on GitHub Trending. I used to check it frequently to see what the hottest new tech was and stay up to date. Now it's oversaturated with whatever the newest AI bubble is. It also doesn't help that MCP-enabled products like OpenClaw star their own repos and artificially inflate their perceived value.
Interesting - claw faking the benchmarks... they match well with openA ideologically.
I hate to sound like I’m turfing for cryptocurrencies, but isn’t there an identity solution there that the crypto nerds solved, to keep identity verification anonymous and surveillance-proof?
I need to double-check what’s available, though I feel like that angle could work.
I’ve also been wondering if a simple lie-and-deception-detection system could be a useful angle. It’s complicated in practice, though human intuition would suggest we figured this out millennia ago - I can’t tell you how many times my body has picked up on someone’s toxic, negative vibe by feeling. I think we understand this better than we realize and could represent it in the computer space with analysis of signals and some follow-up questions. Hope I’m not too naive here.
If you can point me at someone who would fund such projects (not VCs), I'd be happy to apply. Projects like NLnet aren't keen on funding larger-scope projects - at least not if you don't have thought-leader influencer clout.
What are your ideas for this?
A decentralized platform with traceable moderation decisions, mixing direct web-of-trust, delegated, and community-moderated content labeling. Content servers with pay-to-post submissions, to allow sustainable hosting and host-delegated moderation.
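One way to read the "direct web of trust" piece: an endorsement graph where trust is transitive only up to a few hops, so it decays instead of blanketing the whole network. A toy sketch in Python; the graph contents and hop limit are invented for illustration:

```python
from collections import deque

# Hypothetical endorsement graph: who directly vouches for whom.
TRUST = {
    "alice": {"bob", "carol"},
    "bob": {"dave"},
    "carol": set(),
    "dave": set(),
}

def trusted(root, target, max_hops=2):
    # Breadth-first search over endorsements, capped at max_hops so
    # trust decays with distance rather than spanning the whole graph.
    seen, frontier = {root}, deque([(root, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if node == target:
            return True
        if depth == max_hops:
            continue  # don't extend trust beyond the hop limit
        for peer in TRUST.get(node, ()):
            if peer not in seen:
                seen.add(peer)
                frontier.append((peer, depth + 1))
    return False
```

Here alice trusts dave within two hops (alice → bob → dave) but not within one, which is the kind of traceable, bounded decision the design above is gesturing at.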
I agree that it feels like the tiny window of opportunity hasn't quite shut yet, and it's a problem space I know I should take more interest in. What do you see as the viable technical directions? Something along the lines of what Altman was trying to do with his Orb [0]? Something along the lines of the C2PA's Content Credentials [1]?
[0] e.g. https://www.businessinsider.com/sam-altman-tools-for-humanit... and the feature piece at https://time.com/7288387/sam-altman-orb-tools-for-humanity/
[1] https://contentcredentials.org and https://c2pa.org
Instead of collecting biometric info from humans and IDing all of their online movements, you could mandate that LLM output be watermarked, so that the machines' online movements would be IDed instead. (Scott Aaronson was hired by OpenAI to help develop exactly that technology, and the project was shut down under Altman right after it was proven to work.) The implication - that it was shut down to keep the Orb viable in principle, telling humans they had to be tagged to distinguish them from machines that could more easily have been tagged - is very easy to pick up.
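The watermarking idea is public: bias token sampling with a keyed pseudorandom function over recent tokens, then let a detector with the key check whether suspiciously many transitions land in the keyed "green" set. A toy detector sketch in Python; the key and word-level tokenization are invented for illustration, and real schemes operate on model tokens and bias sampling statistically rather than hard-selecting:

```python
import hashlib
import hmac

KEY = b"demo-watermark-key"  # hypothetical key shared with the detector

def is_green(prev_token: str, token: str) -> bool:
    # Keyed PRF over (previous token, candidate token); the low bit of
    # the MAC splits the vocabulary into a ~50% "green" set per context.
    mac = hmac.new(KEY, f"{prev_token}|{token}".encode(), hashlib.sha256)
    return mac.digest()[0] & 1 == 0

def green_fraction(tokens):
    # Unwatermarked text should score near 0.5; a watermarking sampler
    # that prefers green tokens pushes this fraction well above it.
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```

A detector would flag text whose green fraction is statistically too high to be chance, which is also why the open-weight objection in the reply below bites: a model that never applies the bias produces nothing to detect.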
Is that viable given the proliferation of open-weight LLMs that don't apply that sort of watermarking? If somebody with malign intent can skip the attestation, presumably they will, right?
> I've seen no one working on this, and in fact most people on HN seem to be working on ways to further exacerbate this problem.
It's against the HN guidelines to insinuate that astroturfing happens on HN.
Discussion of astroturfing on a post that is specifically about astroturfing is such an obvious exception that I'm having a hard time taking your reply in good faith, but this is me trying to do so anyway instead of just downvoting and flagging like the guidelines suggest I do in such cases.
To quote The Cable Guy, there’s only one answer, someone has to kill the babysitter (tv, social media, Big Tech). It’s hard to kill the babysitter when everyone in Congress is invested balls deep in the babysitter. Eisenhower warned of the coming overreaching powers of the Military Industrial Complex, but no one is attacking the Government Stock Market Tech Complex (GSMTC).
It’s beyond that. It’s the CIA deeply embedded in all the scary, uncomfortable ways you would have hoped were never possible. Presidents win, then reverse their stance and run in the other direction; they don’t want to be another assassinated Kennedy (and imo today they would have fears worse than dying). Congressmen and women are definitely also aware of the deep presence and power of that agency and its intrusion into American life and politics. They don’t want to be the sacrificial pawn that sparks an outright violent American revolution and a teardown of the agency.
I was surveilled, experimented on, and followed by them for being American-Pakistani and speaking out against them from 2022-2023. It was a scary time, and I wish I were making this up. I wonder sometimes if they really are the good guys and I just got things backwards. I’ve also heard that when you’re kidnapped and held in hostile territory long enough, you fall in love with your kidnappers.
Happy to share more details if anyone’s curious.
so what you're saying is that the US government is an illegitimate regime and everyone can fully justify destroying it as an enemy of the people?
It's already here.
There were many disinformation research organizations in the US, including at major institutions such as Harvard and Stanford, that were forced to close by conservatives through lawfare or apparently through donor pressure.
(It's interesting that conservatives saw it as a partisan cause.)