Comment by hodgehog11

2 days ago

This is a genuine concern, which is why it is a very hot topic of research. If you're giving a probabilistic program the potential to do something sinister, using a commercial model, or one you have not carefully fine-tuned yourself, would be a terrible idea. The same principle applies to commercial binaries; without decompilation and thorough investigation, can you really trust what they're doing?

> The same principle applies to commercial binaries; without decompilation and thorough investigation, can you really trust what they're doing?

I have been using OSS libraries for 20+ years; releases are signed, and you can actually check the source. I thought the industry had mostly moved away from trusting magic binaries?
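
To spell out what "actually check" means, here's a minimal sketch of verifying a downloaded release against its published checksum, the simpler cousin of a full GPG signature check (the file name and digest below are hypothetical placeholders). There is no equivalent check you can run against a model's training data.

```python
import hashlib
import sys

# Hypothetical example: verify a downloaded OSS release tarball against the
# checksum its maintainers publish alongside it. The file name and expected
# digest here are placeholders, not real values.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large tarballs don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of("somelib-1.2.3.tar.gz") == EXPECTED_SHA256:
    print("checksum matches the published value")
else:
    sys.exit("checksum mismatch: do not trust this artifact")
```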

With LLMs, a lot of what you get out is determined by the data they were trained on (a black box), plus a dash of refinement and fine-tuning (another black box), and, if the model isn't running on your device, maybe even shadow prompts. I don't want to build anything on LLMs for the same reason I don't want an Alexa in my home.

To me, GenAI slop is the ultimate dumbing down of society and the ultimate control mechanism. People don't wanna bother reading 5 lines of text: bam, summarize it into a one-liner for them. People don't wanna bother learning to draw: bam, some shitty model generates a picture of a cat for you. People don't wanna learn some syntax and think about their codebase: bam, Claude to the rescue.

We are collectively dumbing down an already dumbed-down society, using increasingly sophisticated tools to atrophy our already atrophied brains (doomscrolling and brainrot didn't help).

If experience has shown anything, it's that once our reliance on these tools grows, that's when the people who control them start pushing ads through them, shaping public opinion with them, and surveilling us continuously. Why are we doing this to ourselves? Just so we can ship a webapp 5 minutes faster that won't be maintainable in 2 months?