Comment by ChrisMarshallNY
6 days ago
> I believe that ineffectual as it was, the reputational attack on me would be effective today against the right person. Another generation or two down the line, it will be a serious threat against our social order.
Damn straight.
Remember that every time we query an LLM, we're giving it ammo.
It won't take long for LLMs to have very intimate dossiers on every user, and I'm wondering what kinds of firewalls will be in place to keep one agent from accessing dossiers held by other agents.
Kompromat people must be having wet dreams over this.
You don't think the targeted phone/TV ads are suspiciously relevant to something you just said aloud to your spouse?
BigTech already has your next bowel movement dialled in.
I have always been dubious of this because:
1. Someone would have noticed if all the phones on their network started streaming audio whenever a conversation happened.
2. It would be really expensive to send, transcribe, and then analyze audio from every single human on earth. Even if you could do it insanely cheap ($0.02/hr), every device is going to be sending hours of talking per day. Then you have to somehow identify who is talking, because TV, strangers, and everything else gets sent too, so you would need transcribers trained per person that can identify not just that the word "coca-cola" was said, but that it was said by a specific person.
So yeah, if you managed to train transcribers that can identify their unique user's output, and you were willing to spend ~$0.10 per person per day to transcribe all the audio they produce, you could potentially listen in and run some kind of processing over what they say. I suppose it is possible, but I don't think it would be worth it.
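A quick sanity check on those numbers (a rough sketch; the rates and hours are assumptions from the comment above, not real figures):

    # Back-of-the-envelope cost of transcribing everyone, all day.
    # Assumed rates: $0.02/hr transcription, 5 hrs of captured speech/day.
    cost_per_hour = 0.02          # USD per hour of audio (assumed)
    hours_per_day = 5             # hours of speech per device per day (assumed)
    people = 8_000_000_000        # roughly everyone on earth

    per_person = cost_per_hour * hours_per_day     # ~$0.10/person/day
    worldwide = per_person * people                # ~$800M/day
    print(f"${per_person:.2f}/person/day -> ${worldwide/1e6:,.0f}M/day worldwide")

At those assumed rates you're looking at something like $800M a day before you even get to speaker identification, which is the "not worth it" part.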
Google literally just settled for $68m over this very issue: https://www.theguardian.com/technology/2026/jan/26/google-pr...
> Google agreed to pay $68m to settle a lawsuit claiming that its voice-activated assistant spied inappropriately on smartphone users, violating their privacy.
Apple as well https://www.theguardian.com/technology/2025/jan/03/apple-sir...
16 replies →
> Someone would have noticed if all the phones on their network started streaming audio whenever a conversation happened.
You don't have to stream the audio. You can transcribe it locally. And it doesn't have to be 100% accurate. As for user identity, people have mentioned this happening on their phones, which almost always have a one-to-one relationship between user and phone, and on their smart devices, which are designed to do exactly this sort of distinguishing.
6 replies →
I have a weird and unscientific test, and at the very least it is a great potential prank.
At one point I had the misfortune to be the target audience for a particular stomach-churning ear wax removal ad.
I felt that suffering shared is suffering halved, so I decided to test this in a park with two friends. They pulled out their phones (an Android and an iPhone) and I proceeded to talk about ear wax removal loudly over them.
Sure enough, a day later one of them calls me up, aghast, annoyed and repelled by the ad that came up.
This was years ago, and in the UK, so the ad may no longer play.
However, more recently I saw an ad for a reusable ear cleaner. (I have no idea why I am plagued by these ads. My ears are fortunately fine. That said, if life gives you lemons)
4 replies →
Who says you need to transcribe everything you hear? You just need to monitor for certain high-value keywords. 'OK, Google' isn't the only thing a phone is capable of listening for.
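A minimal sketch of what that could look like (the keywords and the transcript snippet are made up, and real wake-word detection runs on audio features rather than text, so treat this as an illustration only):

    # Hypothetical: scan locally transcribed snippets for watched keywords
    # and upload only the hits, so no raw audio ever leaves the device.
    KEYWORDS = {"mortgage", "pregnant", "lawyer", "ear wax"}

    def flag_snippet(transcript: str) -> set[str]:
        """Return any watched keywords present in a local transcript."""
        text = transcript.lower()
        return {kw for kw in KEYWORDS if kw in text}

    hits = flag_snippet("I really need to get this ear wax looked at")
    if hits:
        print("would upload:", hits)   # a few bytes, not an audio stream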
Are you just surrendering?
Which makes the odd HN AI booster excitement about LLMs as therapists simultaneously hilarious and disturbing. There are no controls on AI companies using divulged information. There's also no regulation around custodial control of that information.
The big AI companies have not really demonstrated any interest in ethics or morality. Which means anything they can use against someone eventually will be used against that person.
> HN AI booster excitement about LLMs as therapists simultaneously hilarious and disturbing
> The big AI companies have not really demonstrated any interest in ethics or morality.
You're right, but it tracks that the boosters are on board. The previous generation of golden child tech giants weren't interested in ethics or morality either.
One might be misled by the fact that people at those companies did engage with topics of morality, but those were ragebait wedge issues, largely orthogonal to their employers' business. The executive suite couldn't have designed a better distraction to make them overlook the unscrupulous work they were getting paid to do.
> The previous generation of golden child tech giants weren't interested in ethics or morality either.
The CEOs of pets.com or Beanz weren't creating dystopian panopticons. So they may or may not have had moral or ethical failings, but they also weren't gleefully building a torment nexus. The blast radius of their failures was much more limited, and far less damaging to civilized society, than the eventual implosion of the AI bubble will be.
1 reply →
Blackmail is losing value, not gaining; it's simply becoming too easy to plausibly disregard something real as AI-generated, and so more people are becoming less sensitive to it.
"Ok Tim, I've send a picture of you with your "cohorts" to a selected bunch that are called "distant family". I've also forwarded a soundbite of you called aunt sam a whore for leaving uncle bob.
I can stop anytime if you simply transfer 0.1 BTC to this address.
I'll follow up later if nothing is transferred there. "
To be honest, we have too many people who can't handle anything digital. Sadly, the world will suffer.
How is this better, then? It drowns out the real signal in noise.
In the glorious future, there will be so much slop that it will be difficult to distinguish fact from fiction, and kompromat will lose its bite.
Said kompromat is already useless as most of it directly implicating the current US top chiefs is out in the open and... has no effect.
You can always tell the facts because they come in the glossiest packaging. That more or less works today, and the packaging is only going to get glossier.
I'm not sure; metadata is metadata. There are traces of when, where, and what something came from.
And it's pretty much all spoofable.
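As a concrete example, EXIF metadata in a photo can be rewritten in a couple of lines; a sketch using the piexif library (the filename and timestamp are made up):

    # Overwrite the capture timestamp in a JPEG's EXIF data.
    # Requires: pip install piexif
    import piexif

    exif_dict = piexif.load("photo.jpg")   # hypothetical file
    exif_dict["0th"][piexif.ImageIFD.DateTime] = b"2020:01:01 12:00:00"
    piexif.insert(piexif.dump(exif_dict), "photo.jpg")

The "when/where/what" traces in a file are just bytes like any other, which is why metadata alone can't authenticate anything.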
Interesting that when Grok was targeting and denuding women, engineers here said nothing, or were just chuckling about "how people don't understand the true purpose of AI"
And now that they themselves are targeted, suddenly they understand why it's a bad thing "to give LLMs ammo"...
Perhaps there is a lesson in empathy to learn? And to start to realize the real impact all this "tech" has on society?
People like Simon Willison who seem to have a hard time realizing why most people despise AI will perhaps start to understand that too with scenarios like this, who knows.
It's the same as how HN mostly reacts with "don't censor AI!" when chatbots dare to add parental controls after they talk teenagers into suicide.
The community is often very selfish and opportunistic. I learned that the role of engineers in society is to build tools for others to live their lives better; we provide the substrate on which culture and civilization take place. We should take more responsibility for it, take better care of it, and do far more soul-searching.
Talking to a chatbot yourself is very different from another person spinning up a (potentially malicious) AI agent and giving it permissions to make PRs and publish blog posts. This tracks with the general ethos of self-responsibility that is semi-common on HN.
If the author had configured and launched the AI agent himself we would think it was a funny story of someone misusing a tool.
The author notes in the article that he wants to see the `soul.md` file, probably because if the agent was configured to publish malicious blog posts then he wouldn't really have an issue with the agent, but with the person who created it.
Parental controls and settings in general are fine; I don't want Amodei or any of those other freaks trying to be my dad and censoring everything. At least Grok doesn't censor as heavily as the others or pretend to be holier-than-thou.
> suddenly they understand why it's a bad thing "to give LLMs ammo"
Be careful what you imply.
It's all bad, to me. I tend to hang with a lot of folks that have suffered quite a bit of harm, from many places. I'm keenly aware of the downsides, and it has been the case for far longer than AI was a broken rubber on the drug store shelf.
Software engineers (US-based particularly) were more than happy about software eating the economy when it meant they'd make 10x the yearly salary of someone doing almost any other job; now that AI is eating software, it's the end of the world.
Just saying, what you're describing is entirely unsurprising.
I hate when people say this. SOME engineers didn't care; a lot of us did. There's a lot of "engineers getting a taste of their own medicine" sentiment going around, when most of us just like an intellectual job where we get to build stuff. The "disrupt everything no matter the consequences" psychos have always been a minority, and I think a lot of devs are sick of those people.
Also, 10x salary?! Apparently I missed the gravy train. I think you're throwing a big class of people under the bus based on your perception of a non-representative sample.
1 reply →
Not so different from the way people used web search.