Comment by GenerWork
1 day ago
I really don't like this. This will inevitably expand beyond child porn and terrorism, and it'll all be up to the whims of "AI safety" people, who are quickly turning into digital hall monitors.
I think those with a thirst for power have seen this a very long time ago, and this is bound to be a new battlefield for control.
It's one thing to massage the kind of data that a Google search shows, but interacting with an AI is much more akin to talking to a co-worker/friend. This really is tantamount to controlling what and how people are allowed to think.
No, this is like allowing your co-worker/friend to leave the conversation.
Right but in this case your co-worker is an automaton and someone else who might well have a hidden agenda has tweaked your co-worker to leave conversations under specific circumstances.
The analogy then is that the third party is exerting control over what your co-worker is allowed to think.
3 replies →
I think you are probably confused about the general characteristics of the AI safety community. It is uncharitable to reduce their work to a demeaning catchphrase.
I’m sorry if this sounds paternalistic, but your comment strikes me as incredibly naïve. I suggest reading up about nuclear nonproliferation treaties, biotechnology agreements, and so on to get some grounding into how civilization-impacting technological developments can be handled in collaborative ways.
I have no doubt the "AI safety community" likes to present itself as noble people heroically fighting civilizational threats, which is a common trope (as well as the rogue AI hypothesis which increasingly looks like a huge stretch at best). But the reality is that they are becoming the main threat much faster than the AI. They decide on the ways to gatekeep the technology that starts being defining to the lives of people and entire societies, and use it to push the narratives. This definitely can be viewed as censorship and consent manufacturing. Who are they? In what exact ways do they represent interests of people other than themselves? How are they responsible? Is there a feedback loop making them stay in line with people's values and not their own? How is it enforced?
> This will inevitable expand beyond child porn and terrorism
This is not even a question. It always starts with "think about the children" and ends up in authoritarian Stasi-style spying. There has not been a single instance where this was not the case.
UK's Online Safety Act - "protect children" → age verification → digital ID for everyone
Australia's Assistance and Access Act - "stop pedophiles" → encryption backdoors
EARN IT Act in the US - "stop CSAM" → break end-to-end encryption
EU's Chat Control proposal - "detect child abuse" → scan all private messages
KOSA (Kids Online Safety Act) - "protect minors" → require ID verification and enable censorship
SESTA/FOSTA - "stop sex trafficking" → killed platforms that sex workers used for safety
This may be an unpopular opinion, but I want a government-issued digital ID with zero-knowledge proof for things like age verification. I worry about kids online, as well as my own safety and privacy.
I also want a government-issued email, integrated with an OAuth provider, that allows me to quickly access banking, commerce, and government services. If I lose access for some reason, I should be able to go to the post office, show my ID, and reset my credentials.
There are obviously risks, but the government already has full access to my finances, health data (I’m Canadian), census records, and other personal information, and already issues all my identity documents. We have privacy laws and safeguards on all those things, so I really don’t understand the concerns apart from the risk of poor implementations.
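The zero-knowledge idea above can be sketched in miniature: an issuer attests "over 18" without the verifier ever seeing the birthdate. Real deployments use zero-knowledge proofs or selective-disclosure credentials (and public-key signatures rather than a shared key); this HMAC-based toy, with hypothetical names throughout, only illustrates the data flow.

```python
# Hypothetical sketch: issuer derives and signs only the boolean claim,
# so the birthdate never leaves the issuer. A shared HMAC key stands in
# for a real signature scheme purely for brevity.
import hmac, hashlib, json

ISSUER_KEY = b"government-issuer-secret"  # held by the issuer (toy shared key)

def issue_age_token(birth_year: int, current_year: int) -> dict:
    """Issuer checks the birthdate and signs only the derived claim."""
    claim = {"over18": current_year - birth_year >= 18}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_age_token(token: dict) -> bool:
    """Verifier learns the boolean claim and nothing else."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over18"]

token = issue_age_token(birth_year=1990, current_year=2025)
print(verify_age_token(token))  # True: age proven, birthdate undisclosed
```

The point of the design is data minimization: the site learns one bit, and only the issuer ever handles the underlying identity document.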
> We have privacy laws and safeguards on all those things
Which have failed horrendously.
If you really just wanted to protect kids, then make kid-safe devices that automatically identify themselves as such when accessing websites/apps/etc., and then make them required for anyone underage.
Tying your whole digital identity and access into a single government controlled entity is just way too juicy of a target to not get abused.
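The kid-safe device idea above amounts to a self-declared signal plus server-side gating. A minimal sketch, assuming a hypothetical `X-Minor-Device` header (not any real standard):

```python
# Hypothetical scheme: the child's device attaches a self-identifying
# header to every request, and sites gate restricted sections on it.
# No ID check of adults is ever needed under this model.

RESTRICTED_SECTIONS = {"/adult", "/gambling"}

def serve(path: str, headers: dict) -> int:
    """Return an HTTP-style status code for the request."""
    is_minor_device = headers.get("X-Minor-Device") == "1"
    if is_minor_device and path in RESTRICTED_SECTIONS:
        return 403  # blocked on the device's own declaration
    return 200

print(serve("/adult", {"X-Minor-Device": "1"}))  # 403
print(serve("/adult", {}))                        # 200
```

The trade-off is the inverse of digital ID: adults stay anonymous by default, at the cost of trusting that minors actually use the flagged devices.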
5 replies →
> I want a government-issued digital ID with zero-knowledge proof for things like age verification
I absolutely do not want this, on the basis that making ID checks too easy will result in them being ubiquitous which sets the stage for human rights abuses down the road. I don't want the government to have easy ways to interfere in someone's day to day life beyond the absolute bare minimum.
> government issued email, integrated with an OAuth provider
I feel the same way, with the caveat that the protocol be encrypted and substantially resemble Matrix. This implies that resetting your credentials won't grant access to past messages.
3 replies →
That's the beauty of local LLMs. Today governments already tell you that we've always been at war with Eastasia, have the ISPs block sites that "disseminate propaganda" (i.e. stuff we don't like), and surface "our news" (i.e. our state propaganda).
With age ID, monitoring and censorship are even stronger, and the line of defense is your own machine and network, which they'll also try to control and make illegal to use for non-approved info, just like they don't allow "gun schematics" for 3D printers or money for 2D ones.
But maybe, more people will realize that they need control and get it back, through the use and defense of the right tools.
Fun times.
As soon as a local LLM that can match Claude Code's performance on decent laptop hardware drops, I'll bow out of using LLMs that are paid for.
I don't think that's a realistic expectation. Sure, we've made progress wrt smaller models being as capable as larger ones three years ago, but there's obviously a lower limit there.
What you should be waiting for, instead, is new affordable laptop hardware that is capable of running those large models locally.
But then again, perhaps a more viable approach is to have a beefy "AI server" in each household, with devices then connecting to it (E2E all the way, so no privacy issues).
It also makes me wonder if some kind of cryptographic trickery is possible to allow running inference in the cloud where both inputs and outputs are opaque to the owner of the hardware, so that they cannot spy on you. This is already the case to some extent if you're willing to rely on security by obscurity - it should be quite possible to take an existing LM and add some layers to it that basically decrypt the inputs and encrypt the outputs, with the key embedded in model weights (either explicitly or through training). Of course, that wouldn't prevent the hardware owner from just taking those weights and using them to decrypt your stuff - but that is only a viable attack vector when targeting a specific person, it doesn't scale to automated mass surveillance which is the more realistic problem we have to contend with.
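The "key embedded in the weights" idea above can be shown with a toy: the client permutes token IDs with a secret table, and the inverse permutation is baked into the model's first layer, so the server only ever handles permuted tokens. As the comment says, this is security by obscurity, not cryptography; the whole scheme here is hypothetical.

```python
# Toy illustration: a secret permutation of token IDs acts as the
# "encryption", and its inverse is folded into the hosted model, so
# plaintext tokens exist only inside the forward pass.
import random

VOCAB = 1000
rng = random.Random(42)            # the secret key (seed)
perm = list(range(VOCAB))
rng.shuffle(perm)                  # client-side "encryption" table

inv = [0] * VOCAB
for i, p in enumerate(perm):
    inv[p] = i                     # this table is baked into the model

def client_encrypt(tokens):
    return [perm[t] for t in tokens]   # what gets sent to the cloud

def model_first_layer(tokens):
    return [inv[t] for t in tokens]    # model recovers plaintext internally

msg = [1, 2, 3, 500]
assert model_first_layer(client_encrypt(msg)) == msg
print("round trip ok; server saw:", client_encrypt(msg))
```

As the comment notes, anyone holding the weights can extract the inverse table, so this only raises the cost of bulk surveillance rather than preventing targeted decryption; schemes with real guarantees would need something like fully homomorphic encryption, which remains far too slow for LLM inference.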
What kinds of tools do you think are useful in getting control/agency back? Any specific recommendations?
[flagged]
Inevitable? That’s a guess. You don’t know the future with certainty.
Did you read the post? This isn't about censorship, but about conversations that cause harm to the user. To me that sounds more like suggesting suicide, or causing a manic episode like this: https://www.nytimes.com/2025/08/08/technology/ai-chatbots-de...
... But besides that, I think Claude/OpenAI trying to prevent their product from producing or promoting CSAM is pretty damn important regardless of your opinion on censorship. Would you post a similar critical response if Youtube or Facebook announced plans to prevent CSAM?
Did you read the post? It explicitly states multiple times that it isn't about causing harm to the user.
If a person’s political philosophy seeks to maximize individual freedom over the short term, then that person should brace themselves for the actions of destructive lunatics. They deserve maximum freedoms too, right? /s
Even hard-core libertarians account for the public welfare.
Wise advocates of individual freedoms plan over long time horizons which requires decision-making under uncertainty.