Comment by Kamillaova

14 hours ago

I genuinely believe that only such an approach (to "protecting children" from viewing "dangerous/unwanted content") is correct and maximally effective. All the others mostly amount to security theater: they don't actually prevent direct access to "dangerous" content, they merely create the illusion of doing so. This ranges from client-side-only checks (like Telegram's in the UK) to "privacy-preserving" checks based on ZK proofs or similar technologies, which are currently being promoted in the EU. The first can be bypassed simply by searching for workarounds; as for the second, one person could verify thousands of others using their own documents, and that's it. It is literally security theater, and I hate it.

And in my opinion, we shouldn't support such measures, meaning we shouldn't implement or comply with them, but rather protest against them: either undermine their purpose or create a significant appearance of problems. In other words, spread methods to bypass them, support such efforts in any way possible, or deny access to services (and so on) in jurisdictions where they're banned by inhumane laws. This is, in a way, already common practice in the field of "copyright", and I sincerely hope it spreads to everyday matters.

It's deeply sad that nobody addresses the root problem, only its consequences: they try to "hide unwanted content" instead of making it "non-unwanted". And it's even sadder that so few of those who could actually influence how such "protections" are implemented advocate this approach. Off the top of my head, I can only name Finland as a country actively promoting educational programs and similar solutions to this problem.