Comment by helloplanets
1 day ago
And when programming with agentic tools, you need to actively push to keep the idea from regressing to the most obvious/average version. The effort you need to expend on pushing an idea that deviates from the 'norm' (because it's novel) is actually comparable to the effort it takes to type something out by hand. Just two completely different types of effort.
There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.
You just described the burden of outsourcing programming.
Outsourcing development and vibe coding are incredibly similar processes.
If you just chuck ideas at the external coding team/tool you often get rubbish back.
If you're good at managing the requirements and defining things well you can achieve very good things with much less cost.
With the basic and enormous difference that the feedback loop is 100x or even 1000x faster. That changes the type of game completely, although other issues will probably arise as we try this new path.
That embeds an assumption that the outsourced human workers are incapable of thought, and experience/create zero feedback loops of their own.
Frustrated rants about deliverables aside, I don't think that's the case.
100%! There is a significant analogy between the two!
There is a reason management types are drawn to it like flies to shit.
YES!
AI assistance in programming is a service, not a tool. You are commissioning Anthropic, OpenAI, etc. to write the program for you.
Yes, but as with outsourcing, those making such decisions often lack the awareness, or even the skills, to properly specify the requirements and evaluate the results.
We need a new word for on-premise offshoring.
On-shoring ;
If the on-premise offshoring centers around the use of LLMs then I suggest the term "off-braining." :)
> On-shoring
I thought "on-shoring" was already commonly used for the process that undoes off-shoring.
Corporate has been using the term "best-shoring" for a couple of years now. My best guess is that it means "off-shoring or on-shoring, whichever of the two is cheaper".
Rubber-duckying... although a rubber ducky can't write code... infinite-monkeying?
NIH-shoring?
Ai-shoring.
Tech-shoring.
eshoring
We already have a perfect one
Slop;
Fair enough but I am a programmer because I like programming. If I wanted to be a product manager I could have made that transition with or without LLMs.
Agreed. The higher-ups at my company are, like most places, breathlessly talking about how AI has changed the profession - how we no longer need to code, but merely describe the desired outcome. They say this as though it’s a good thing.
They’re destroying the only thing I like about my job - figuring problems out. I have a fundamental impedance mismatch with my company’s desires, because if someone hands me a weird problem, I will happily spend all day or longer on that problem. Think, hypothesize, test, iterate. When I’m done, I write it up in great detail so others can learn. Generally, this is well-received by the engineer who handed the problem to me, but I suspect it’s mostly because I solved their problem, not because they enjoyed reading the accompanying document.
FWIW, when a problem truly is weird, AI & vibe coding tends to not be able to solve it. Maybe you can use AI to help you spend more time working on the weird problems.
When I play sudoku with an app, I like to turn on auto-fill numbers, and auto-erase numbers, and highlighting of the current number. This is so that I can go directly to the crux of the puzzle and work on that. It helps me practice working on the hard part without having to slog through the stuff I know how to do, and generally speaking it helps me do harder puzzles than I was doing before. BTW, I’ve only found one good app so far that does this really well.
With AI it’s easier to see there are a lot of problems that I don’t know how to solve, but others do. The question is whether it’s wasteful to spend time independently solving that problem. Personally I think it’s good for me to do it, and bad for my employer (at least in the short term). But I can completely understand the desire for higher-ups to get rid of 90% of wheel re-invention, and I do think many programmers spend a lot of time doing exactly that; independently solving problems that have already been solved.
Though it's not as if management roles have ever appreciated the creative aspects of the job, including problem solving. Management has always wished to just describe the desired outcome and get magic back. They don't like acknowledging that problems and complications exist in the first place. Management likes to think that they are the true creatives behind company vision and don't like software developers finding solutions bottom up. Management likes to have a single "architect" and maybe a single "designer" for the creative side, ones they like and who are a "rising" political force (in either the Peter Principle or Gervais Principle sense), rather than deal with a committee of creative people. It's easier for them to pretend software developers are blue-collar cogs in the system rather than white-collar problem solvers with complex creative specialties. LLMs are only accelerating those mechanics and beliefs.
> They’re destroying the only thing I like about my job - figuring problems out.
So, tackle other problems. You can now do things you couldn't even have contemplated before. You've been handed a near-godlike power, and all you can do is complain about it?
I’m a programmer (well half my job) because I was a short (still short) fat (I got better) kid with a computer in the 80s.
Now, the only reason I code and have been since the week I graduated from college was to support my insatiable addictions to food and shelter.
While I like seeing my ideas come to fruition, over the last decade my ideas were a lot larger than I could reasonably do over 40 hours without having other people working on projects I lead. Until the last year and a half where I could do it myself using LLMs.
Seeing my carefully designed spec, including all of the cloud architecture, get done in a couple of days - with my hands on the wheel - when it would have taken at least a week of me doing some of the work while juggling a couple of other people, is life changing.
Not sure why this is getting downvoted, but you're right — being able to crank out ideas on our own is the "killer app" of AI so to speak.
Granted, you would learn a lot more if you pieced your ideas together manually, but it all depends on your own priorities. The difference is, you're not stuck cleaning up after someone else's bad AI code. That's the side of the AI coin that I think a lot of tech workers are struggling with, eventually leading to rampant burnout.
I became an auto mechanic because I love machining heads, and dropping oil pans to inspect, and fitting crankshafts in just right, and checking fuel filters, and adjusting alternators.
If I wanted to work on electric power systems I would have become an electrician.
(The transition is happening.)
I can't help but imagine training horses vs training cats. One of them is rewarding, a pleasure, beautiful to see; the other is frustrating, leaves you with a lot of scratches, and ultimately ends with both of you "agreeing" on a marginal compromise.
Right now vibe coding is more like training cats. You are constantly pushing against the model's tendency to produce its default outputs regardless of your directions. When those default outputs are what you want - which they are in many simple cases of effectively English-to-code translation with memorized lookup - it's great. When they are not, you might as well write the code yourself and at least be able to understand the code you've generated.
Yup - I've related it to working with juniors: often smart, with a good understanding and "book knowledge" of many of the languages and tools involved, but you often have to step back and correct things regularly - normally around local details and project specifics. But then the "junior" you work with every day changes, so you have to start again from scratch.
I think there needs to be a sea change in current LLM tech to make that no longer the case - either massively increased context sizes, so they can hold nearly a career's worth of learning (without the tendency to start ignoring that context, as happens at the larger end of today's still-way-too-small-for-this context windows), or continuous training passes that integrate those "learnings" directly into the weights themselves - which might be theoretically possible today, but would take many orders of magnitude more compute than is available, even ignoring cost.
I've never seen a horse that scratches you.
This is why people think less of artists like Damien Hirst and Jeff Koons: their hands have never once touched the art. They have no connection to the effort. To the process. To the trial and error. To the suffering. They've outsourced it, monetized it, and made it as efficient as possible. It's also soulless.
To me it feels a bit like literate programming: it forces you to form a much more accurate idea of your project before you start. Not a bad thing, but it can also be wasteful when you eventually realise, after the fact, that the idea was actually not that good :)
Yeah, it's why I don't like trying to write up a comprehensive design before coding in the first place. You don't know what you've gotten wrong until the rubber meets the road. I try to get a prototype/v1 of whatever I'm working on going as soon as possible, so I can root out those problems as early as possible. And of course, that's on top of the "you don't really know what you're building until you start building it" problem.
> need to make it crystal clear
That's not an upside unique to LLM-written vs human-written code. When writing it yourself, you also need to make it crystal clear. You just do that in the language of implementation.
And programming languages are designed for clarifying the implementation details of abstract processes; while human language is this undocumented, half grandfathered in, half adversarially designed instrument for making apes get along (as in, move in the same general direction) without excessive stench.
The humane and the machinic need to meet halfway - any computing endeavor involves not only specifying something clearly enough for a computer to execute it, but also communicating to humans how to benefit from the process thus specified. And that's the proper domain not only of software engineering, but the set of related disciplines (such as the various non-coding roles you'd have in a project team - if you have any luck, that is).
But considering the incentive misalignments which easily come to dominate in this space even when multiple supposedly conscious humans are ostensibly keeping their eyes on the ball, no matter how good the language machines get at doing the job of any of those roles, I will still intuitively mistrust them exactly as I mistrust any human or organization with responsibly wielding the kind of pre-LLM power required for coordinating humans well enough to produce industrial-scale LLMs in the first place.
What's said upthread about the wordbox continually trying to revert you to the mean as you're trying to prod it with the cowtool of English into outputting something novel, rings very true to me. It's not an LLM-specific selection pressure, but one that LLMs are very likely to have 10x-1000xed as the culmination of a multigenerational gambit of sorts; one whose outset I'd place with the ever-improving immersive simulations that got the GPU supply chain going.
I think harder while using agents, just not about the same things. Just because we all got superpowers doesn't make the problems go away; they just move, and we still have our full brains to solve them.
It isn't all great: skills that feel important have already started atrophying, but other skills have been strengthened. The hardest part is being able to pace oneself, as well as figuring out how to start cracking certain problems.
Uniqueness is not the aim. Who cares if something is uniquely bad? But in any case, yes, if you use LLMs uncritically, as a substitute for reasoning, then you obviously aren't doing any reasoning and your brain will atrophy.
But it is also true that most programming is tedious and hardly enriching for the mind. In those cases, LLMs can be a benefit. When you have identified the pattern or principle behind a tedious change, an LLM can work like a junior assistant, allowing you to focus on the essentials. You still need to issue detailed and clear instructions, and you still need to verify the work.
Of course, the utility of LLMs is a signal that either the industry is bad at abstracting, or that there's some practical limit.
Yet another example of "comments that are only sort of true because high temperature sampling isn't allowed".
If you use LLMs at very high temperature with samplers which correctly keep your writing coherent (e.g. min_p, or better ones like top-h, P-less decoding, etc.), then "regression to the mean" literally DOES NOT HAPPEN!!!!
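For what it's worth, here's a rough sketch of the idea in plain Python/NumPy, not any particular inference library's API: min_p filtering keeps only tokens whose probability is at least some fraction of the top token's probability after temperature scaling, so a high temperature widens the choices without letting in the incoherent tail. The vocabulary, temperature, and threshold below are made up for illustration.

```python
# A toy sketch of min_p sampling at high temperature (plain NumPy, not any
# particular inference library's API; values are made up for illustration).
import numpy as np

def sample_min_p(logits, temperature=1.5, min_p=0.1, rng=None):
    if rng is None:
        rng = np.random.default_rng()

    # Temperature scaling: higher temperature flattens the distribution,
    # giving less likely (less "average") tokens more probability mass.
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()

    # min_p filtering: keep only tokens whose probability is at least
    # min_p times that of the single most likely token. This cuts off the
    # incoherent tail that plain high-temperature sampling would reach into.
    keep = probs >= min_p * probs.max()
    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()

    return rng.choice(len(probs), p=probs)

# Toy "vocabulary": one dominant token and a few plausible alternatives.
print(sample_min_p([5.0, 4.2, 4.0, 1.0, -2.0]))
```

With min_p = 0.1 here, the dominant token still wins most of the time, but the near-miss alternatives stay in play while the junk tail is cut off.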
Have you actually tried high temperature values for coding? Because I don’t think it’s going to do what you claim it will.
LLMs don’t “reason” the same way humans do. They follow text predictions based on statistical relevance. So raising the temperature will more likely increase the likelihood of unexecutable pseudocode than it would create a valid but more esoteric implementation of a problem.
To put it another way, a high-temperature mad-libs machine will write a very unusual story, but that isn't necessarily the same as a clever story.
High temperature seems fine for my coding uses on GPT5.2.
Code that fails to execute or compile is the default expectation for me. That's why we feed compile and runtime errors back into the model after it proposes something each time.
I'd much rather the code sometimes not work than to get stuck in infinite tool calling loops.
How do you configure LLM temperature in coding agents, e.g. opencode?
https://opencode.ai/docs/agents/#temperature
set it in your opencode.json
You can't without hacking it! That's my point! The only places you can easily are via the API directly, or "coomer" frontends like SillyTavern, Oobabooga, etc.
Same problem with image generation (lack of support for different SDE solvers, the image version of LLM sampling) but they have different "coomer" tools, i.e. ComfyUI or Automatic1111
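For the "via the API directly" route, here's a minimal sketch using the OpenAI Python client; the model name and prompt are placeholders, and many OpenAI-compatible endpoints (including local servers) accept the same parameter:

```python
# Minimal sketch of setting the sampling temperature via the API directly.
# Model name and prompt are placeholders; many OpenAI-compatible endpoints
# (including local servers) accept the same parameter.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    temperature=1.3,       # higher = more diverse sampling
    messages=[
        {"role": "user", "content": "Suggest a non-obvious way to structure this parser."},
    ],
)
print(response.choices[0].message.content)
```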