Comment by noisy_boy
3 days ago
It is 80/20 again - it gets you 80% of the way in 20% of the time and then you spend 80% of the time to get the rest of the 20% done. And since it always feels like it is almost there, sunk-cost fallacy comes into play as well and you just don't want to give up.
An approach I tried recently is to use it as a friction remover instead of a solution provider. I do the programming but use it to remove pebbles, such as that small bit of syntax I forgot, basically to keep up the velocity. However, I don't take the wholesale code it offers. Keeping the active thinking cap on results in code I actually understand, while avoiding skill atrophy.
Well, we used to have a sort of inverse Pareto, where 80% of the work took 80% of the effort and the remaining 20% of the work also took 80% of the effort.
I do think you're onto something with getting pebbles out of the road, inasmuch as once I know what I need to do, AI coding makes the doing much faster. Just yesterday I was playing around with removing things from a List using the Java streams API and I kept running into ConcurrentModificationExceptions, which happen when a list is structurally modified while it's still being iterated, since the iterator can no longer guarantee it's seeing an unaltered list. I spent about an hour trying to write a method that deep-copies the list, makes the change, and returns the copy, running into all sorts of problems, until I asked AI to build me a thread-safe list mutation method and it was like "Sure, this is how I'd do it, but also the API you're working with already has a method that just... does this." Cases like this are where AI is supremely useful - intricate but well-defined problems.
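For concreteness, here's a minimal sketch of that failure mode and the fix, assuming the built-in the AI pointed to was Collection.removeIf (the list contents and predicate are invented for illustration):

    import java.util.ArrayList;
    import java.util.List;

    public class RemoveDemo {
        public static void main(String[] args) {
            List<String> items = new ArrayList<>(List.of("keep", "drop", "keep"));

            // Classic failure: structurally modifying a list while iterating it
            // throws ConcurrentModificationException, even on a single thread.
            // for (String s : items) {
            //     if (s.equals("drop")) items.remove(s);
            // }

            // The built-in that "just does this": removes matching elements in place.
            items.removeIf(s -> s.equals("drop"));

            // Stream alternative: filter into a fresh list and leave the original alone.
            List<String> kept = items.stream()
                    .filter(s -> !s.equals("drop"))
                    .toList();

            System.out.println(items); // [keep, keep]
            System.out.println(kept);  // [keep, keep]
        }
    }

(For genuinely concurrent mutation across threads, removeIf alone wouldn't be enough; you'd want CopyOnWriteArrayList or external synchronization.)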
> once I know what I need to do AI coding makes the doing much faster
Most commenters on this paper seem not to respond to its strongest result: the developers wrongly thought and felt that using AI had sped up their work. So we need to be super cautious about what we think we know.
Code reuse at scale: 80 + 80 = 160% ~ phi...coincidence?
I think this may become a long horizon harvest for the rigorous OOP strategy, may Bill Joy be disproved.
Gray goo may not [taste] like steel-cut oatmeal.
A 1.6x multiplier is low; we usually need to apply 5x.
It's often said that π is the factor by which one should multiply all estimates – reducing it to ɸ would be a significant improvement in estimation accuracy!
I think it’s most useful when you basically need Stack Overflow on steroids: I basically know what I want to do but I’m not sure how to achieve it using this environment. It can also be helpful for debugging and rubber ducking generally.
> rubber ducking
I don't mean to pick on your usage of this specifically, but I think it's noteworthy that the colloquial definition of "rubber ducking" seems to have expanded to include "using a software tool to generate advice/confirm hunches". I always understood the term to mean a personal process of talking through a problem out loud in order to methodically, explicitly understand a theoretical plan/process and expose gaps.
Based on a lot of articles/studies I've seen (admittedly I haven't dug into them too deeply), it seems like using chatbots for this type of task actually has negative cognitive impacts on some groups of users - the opposite of the personal value I thought rubber-ducking was supposed to provide.
There is something that happens to our thought processes when we verbalise or write down our thoughts.
I like to think of it this way: instead of having seemingly endless amounts of half-thoughts spinning around inside your head, you make an idea or thought more “fully formed” when you express it verbally or with written (or typed) words.
I believe this is part of why therapy can work: by actually expressing our thoughts, we're kind of forced to face realities, and after doing so it's often much easier to reflect on them. Therapists often recommend personal journals, as they can also work for this.
I believe rubber ducking works because having to explain the problem forces you to actually gather your thoughts into something usable, which you can then reflect on more effectively.
I see no reason why doing the same thing except in writing to an LLM couldn’t be equally effective.
Indeed the duck is supposed to sit there in silence while the speaker does the thinking ^^
This is what human language does though, isn't it? Evolves over time, in often weird ways; like how many people "could care less" about something they couldn't care less about.
Well OK, sure. But I'm still having a “conversation” with nobody. I'm surprised how often the AI gives me a totally wrong answer, but a combination of formulating the question and something in the answer makes me think of the right thing after all.
All those things are true, but it's such a small part of my workflow at this point that the savings, while nice, aren't nearly as life-changing to my job as my CEO is forcing us to think they are.
Until AI can actually untangle our 14-year-old codebase full of hodge-podge code and read every commit message, JIRA ticket, and Slack conversation related to the changes in full context, it's not going to solve a lot of the hard problems at my job.
Some of the “explain what it does” functionality is better than you might think, and to be honest I find myself called on to work with unfamiliar tools all the time, so I find plenty of value.
The issue is that it is slow and verbose, at least in its default configuration. The amount of reading is non-trivial. There's a reason most references are dense.
You can partly solve those issues by changing the prompt to tell it to be concise and not explain its code.
But nothing will make them stick to the one API version I use.
Well, compared to what? What other method would answer that kind of question faster?
Absolutely this. For a while I was working with a language I was only partially familiar with, and I'd say "here's how I would do this in [primary language], rewrite it in [new language]" and I'd get a decent piece of code back. A little searching in the project to make sure it was stylistically correct and then done.
Those kinds of tasks are good for it, yeah. “Here’s some JSON. Please generate a Java class I can deserialize it into” is similar.
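A minimal sketch of what that exchange produces, assuming Jackson as the deserializer; the JSON shape and the User record are invented for illustration:

    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.util.List;

    // A record is an idiomatic deserialization target on modern Java (Jackson 2.12+).
    record User(int id, String name, List<String> tags) {}

    public class JsonDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical input: {"id": 42, "name": "Ada", "tags": ["dev", "ml"]}
            String json = "{\"id\":42,\"name\":\"Ada\",\"tags\":[\"dev\",\"ml\"]}";
            User user = new ObjectMapper().readValue(json, User.class);
            System.out.println(user.name()); // Ada
        }
    }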
> and then you spend 80% of the time to get the rest of the 20% done
This was my pre-AI experience anyway, so getting that first chunk of time back is helpful.
Related: One of the better takes I've seen on AI from an experienced developer was, "90% of my skills just became worthless, and the other 10% just became 1,000 times more valuable." There's some hyperbole there, but I like the gist.
It’s not funny when you find yourself redoing the first 80%, as the only way to complete the second 80%.
Let us know if that dev you're talking about winds up working 90% less for the same amount, or earning 1000x more
Otherwise he can shut the fuck up about being 1000x more valuable imo
100% agreed. It is all about removing friction for me. Case in point: I would not have touched React in my previous career without the assist that LLMs now provide. The barrier to entry just _felt_ too large, and one always has the instinct to stick with what one knows.
However, it is _fun_ to get over the barrier when doing so is just chatting with a model to get a quick tutorial and produce working code for a prototype (for your specific needs), applying the understanding you just developed. The alternative (without LLMs) is to first do the groundwork of learning via tutorials in text/video form and then do the cognitive mapping of applying that learning to one's prototype. Along that path I would make a lot of mistakes that expert/intermediate React developers don't make.
One could argue that it shortcuts some learning, and perhaps the old way results in better retention. But our field changes so fast... and when it remains static for too long, projects die. I think of all this as an accelerant for adopting new ways of thinking about software and diffusing them more quickly across the developer population globally. Code is always fungible, anyway. The job is about all the other things one needs to do besides coding.
Agreed and +1 on "always feels like it is almost there" leading to time sink. AI is especially good at making you feel like it's doing something useful; it takes a lot of skill to discern the truth.
More sinister is that it then takes 80% of the credit.
It works great for adding stuff to an already established codebase. Things like “we have these search parameters, also add foo”, or “remove anything related to x…”.
Exactly. If you can give it a contract and a context, essentially, and it doesn't need to write a large amount of code to fulfill it, it can be great.
I just used it to write about 80 lines of new code like that, and there's no question it saves time.
As an old dev this is really all I want: a sort of autocorrect for my syntactical errors to save me a couple compile-edit cycles.
What I want is not autocorrect, because that won't teach me anything. I want it to yell at me loudly and point to the syntactical error.
Autocorrect is a scourge of humanity.
The problem is I then also have to figure out the code it wrote before I can complete the final 20%. I have no momentum and am starting almost from scratch mentally.
This is just not true in my experience. Not with the latest models. I routinely manage to 1-shot a whole "thing." E.g., yesterday I needed a WordPress plugin for a single-time use to clean up a friend's site. I described exactly what I needed, it produced the code, it ran perfectly the first time, and the UI looked like a million dollars. It got me 100% of the way in 0% of the time.
I'm the biggest skeptic, but more and more I'm seeing it get me the bulk of the way with very little back-and-forth. If it were even more heavily integrated into my dev environment, it would save me even more time.