Comment by Our_Benefactors
19 hours ago
> You've used AI to do something that was a single command
Yes, and that’s a good thing! This is in fact where a lot of AI value lies. You don't need to know that command anymore: knowing the functional contract is now sufficient to perform the requisite work duties. This is huge!
Not even joking that the main benefit I've seen from "AI" for editing code is that it lets me quickly do all the things I could already have been doing just as quickly if I'd ever bothered to learn to use my tools.
Of course I lose about as much time as I save to its fuck-ups, so I'd still have been better off learning to actually use a text editor properly. Though (as I mentioned in another post) part of why I've never done that in 25ish years of writing code for pay is that my code-writing speed has never been too slow for any of the businesses I've worked in, i.e. other things move slowly enough it never mattered.
Once I learn a command that is both repeatable and useful, I prefer to either keep it in my mind or in my aliases. Thank you.
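That's roughly what this looks like in practice. A minimal sketch (the alias name and the command are invented for illustration; the alias line would live in something like ~/.bashrc):

```shell
# Hypothetical alias: once the command is learned, pin it down
# so it costs zero tokens and zero recall effort next time.
alias jsonpp='python3 -m json.tool'

# The underlying command it wraps, run directly:
echo '{"a": 1}' | python3 -m json.tool
```

After that, it's one word at the prompt instead of a paragraph to a model.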
Yes but companies would love for a robot to be able to do it instead.
You can still do this! And AI will teach you that command far far faster than synthesizing it yourself.
Yeah, I know AI is useful for that, that's why I said after I learn. Hopefully once.
That's what Skills are for
:^)
0 tokens per command >>> Hundreds of tokens per command
Is it? If the LLM's change broke something, do you know enough to fix it?
The same question can be applied to work without AI, so this isn’t a meaningful criticism
In one case you are using tools you understand; in the other you aren't. Seems different to me.
I watched people ask LLMs for linting/refactoring help, burning easily 5 minutes for something that could be completed deterministically, locally, in ms using any modern editor.
Quite frankly it was embarrassing. We've had tools for static analysis for ages. Use them.
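For the "completed deterministically, locally, in ms" claim, here's a sketch of the kind of refactor I mean (a mechanical rename via grep + sed; the file, names, and paths are invented, and `sed -i` here assumes GNU sed):

```shell
# Hypothetical demo: rename an identifier across a tree
# deterministically — milliseconds, zero tokens, same result every run.
mkdir -p /tmp/rename_demo
printf 'old_name() { :; }\nold_name\n' > /tmp/rename_demo/a.sh

# Find every file containing the identifier and rewrite it in place.
grep -rl 'old_name' /tmp/rename_demo | xargs sed -i 's/old_name/new_name/g'

grep -c 'new_name' /tmp/rename_demo/a.sh   # → 2
```

Any editor or language server does the scoped version of this even more safely.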
Someone with better knowledge could work 100x faster using 100x fewer resources. They did it the slow, expensive way but at least didn't have to think? Odd flex.
Look, I feel for junior admins, I was one 35 years ago and the only reason I'm where I am today was because I had to learn the hard way, repeatedly and often.
I use the shit out of opencode to do things as a force multiplier, not as a way to keep me from knowing what it's doing.
The point at which we're optimizing for "we don't need to know that anymore" is the point at which everything blows up, because agentic work is not fully deterministic, models hallucinate even simple things.
Blindly relying on your agent weapon of choice to just do the right thing because you didn't take the time to understand how the lego fits together is an actual problem.
Replace agent with 'direct report' and you've just described middle management. For better or worse, companies have always run on non-deterministic tasks doled out by persons who barely understand the work.
Honestly human employees feel closer to deterministic than LLMs.
I have a pretty good sense of the quality of work my coworkers output, where they tend to struggle, where they're talented, what level of review is required, what I should double check, etc.
By contrast LLMs are more like picking a contractor out of a hat. Even with good guardrails the quality and types of issues vary wildly prompt to prompt.
> "You've used AI to do something that was a single command."
A coworker created a shared Claude Code skill in our repo.
It's obviously something that can be done as a python or bash+jq script and run deterministically.
Instead we use natural language and waste tokens for that.
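I don't know what the coworker's skill actually does, but the deterministic shape of it is a few lines of bash+jq. A hypothetical sketch of the pattern (the JSON layout, field names, and path are all invented for illustration):

```shell
# Hypothetical: the deterministic script version of a "skill".
# Write a sample JSON file, then extract name@version pairs with jq.
cat > /tmp/pkg.json <<'EOF'
{"deps": [{"name": "foo", "version": "1.2"}, {"name": "bar", "version": "3.4"}]}
EOF

jq -r '.deps[] | "\(.name)@\(.version)"' /tmp/pkg.json
# → foo@1.2
# → bar@3.4
```

Same output every time, reviewable in the repo, and no tokens spent.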
> You dont need to know that command anymore
I find it hard to read "You can do things without knowing things" as a positive improvement in work, society, life, anywhere.
You are the worst kind of gatekeeper, then! A true reactionary who believes they are righteous for impeding others!
I'm pretty sure you're being sarcastic. I hope so.
It's hard to tell anymore because I have encountered people who genuinely do seem to think that disliking AI is gatekeeping somehow
It's also several hundred times more expensive.
False! Labor is the most expensive input in creating software, not joules of energy. Using AI is far far cheaper than expecting workers to synthesize all knowledge themselves.
Both workflows involve typing a very small number of characters and should take under 30 seconds, so there's no difference in labor cost. However, the compute and energy cost of solving the problem with tokens versus a tool call differs by orders of magnitude, even for trivial stuff like grepping. It gets worse as the problem gets more complicated and the tools more specialized.
I can't tell if this comment is sarcasm or not. If you let AI run commands you don't understand (especially in production) you may end up with some nasty surprises.
With a comment like that, it's no wonder you're dramatically below our minimum guidance for tokens consumed.
If AI breaks production this way, you just tell AI to fix it! And look, now you've consumed tokens twice. Think on that and I'll see you at the end-of-year performance review.
It reminds me of using something like GDP as a measure, where spending more money == good even if it's at the cost of actually less productivity (more middlemen extracting rents, for example).
It’s not sarcastic at all. Using AI to accomplish things and fill in your knowledge gaps is literally the whole point of it. People are downvoting and salty because they thought the value in the job was in memorizing esoteric APIs (it never was)