Comment by dakolli
10 days ago
To put it simply:
1. LLMs only serve to reduce the value of your labor to zero over time. They don't even need to be great tools; they just need to be perceived as being as good as engineers for the C-suite to lay everyone off and rehire at 25-50% of previous wages, repeating that cycle over a decade.
2. LLMs will not allow you to join the billionaire class; that wouldn't make sense, since anyone could if that were the case. They erode the technical meritocracy these tech CEOs worship on podcasts and YouTube (which makes you wonder what else they're lying about). Your original ideas and that startup you think is going to save you won't be worth anything if someone with minimal skills can copy them.
3. People don't want to admit it, but heavy users of LLMs know they're losing something, and there's a deep-down feeling that it's not the right way to go about things. It's not dissimilar to the guilty dopaminergic crash you get when taking shortcuts in life.
I used something like 1.8 billion Anthropic tokens last year. I won't be using them again; I won't be participating in this experiment. I've likely lost years of my life in "potential learning" to the social media experiment, and I'm not doing that again. I want to study compilers this year, and I want to do it deeply. I won't be using LLMs.
You may be throwing the baby out with the bathwater. I learned more last year from ChatGPT Pro than I had in the previous five years, FWIW.
Just say 'LLMs'. Whenever someone name-drops a specific model, I can't help but think it's just an ad bot.
The "Pro" part is particularly suspect
Pro isn't even a model. If they had actually used a model name, I'd just think they were into LLMs. ChatGPT Pro is a specific paid service.
Huh, thanks.
I've recently found LLMs to be an excellent learning tool, using them hand-in-hand with a textbook to learn digital signal processing. If the book doesn't explain something well, I ask the LLM to explain it. It's not all brain wasting.
Well said. I use it the same way. Sometimes a technical book will assume you know a concept, use an acronym that's never spelled out (but obvious), or just not be very explicit. I also use it to test my knowledge of the subject directly (you have to be careful of the people-pleasing behavior, but in my experience they tend to gently tell you where you're wrong rather than lie to you). The same goes for hands-on books: sometimes the examples aren't very interesting, or you have something of your own you'd like to try. As long as you use it carefully like this, it can be really transformative. I do agree there's a potential risk of offloading too much thinking to it, but if you keep that in mind, I don't see the problem.
Exactly. LLMs are really just an extension of the internet. You can use the internet to expand on what you know or you can use the internet to rot your brain.
We have the agency to decide, and if the majority decides on brain rot, I really don't care.
I've been learning things from the internet for 30 years, and LLMs are just the greatest gift. If someone isn't leveraging these tools to increase what they know, good luck to them.
I've said it simply, much like you, and it comes off as unhinged lunacy. Inviting people to learn for themselves has been so much more successful than directed lectures, at least in my own experiments with discourse and teaching.
A lot of us have fallen into the many, many toxic traps of technology these past few decades. We know social media is deliberately engineered to be addictive (like cigarettes and tobacco products before it), we know AI hinders our learning process and shortens our attention spans (like excess sugar intake, or short-form content deluges), and we know that just because something is newer or faster does not mean it's automatically better.
You're on the right path, I think. I wish you good fortune and immense enjoyment in studying compilers.
I agree, you're probably right! Thanks!