
Comment by PaulHoule

1 day ago

It's kinda funny that my experience is the opposite, but then again I'm senior and working at a different scale.

I don't think my LLM-assisted programming is much faster than my unassisted programming, but I think I write higher-quality code.

What I find is that my assistant sometimes sees a simple solution that I miss. Going back and forth with an assistant, I am likely to think things through in more detail than I would otherwise. I like to load projects I depend on into IntelliJ IDEA and use Junie to have a conversation with the code; it gives me a more thorough and complete understanding, more quickly, than I would get reading the code on my own.

As a pro maintenance programmer there is always a certain amount of "let sleeping dogs lie": if something works but you don't understand it, you may decide to leave it alone. With an assistant I feel empowered to put more effort into really understanding things I could otherwise get away without understanding, to fix minor problems I wouldn't have fixed otherwise, etc.

One bit of advice: as the context grows, assistants seem to break bad. Often I ask Junie to write something, then give it some feedback about things it got wrong, and early in the session I'm blown away. You might think you should keep the session going so it will remember the history, but it doesn't have the ability to realize that some of the history is stale because it describes the way the code was before [1], and the result is that it eventually starts going in loops. It makes sense, then, to start a new session and possibly feed it some documentation about the last session that tells it what it needs to know going forward.
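To make the handoff concrete, here is a minimal sketch of the idea in Python. None of this is Junie's actual API; Session, summarize, and the "decision" role are hypothetical stand-ins for however you keep and condense the conversation. The point is just that the fresh session starts from a short note about the code as it is now, not the stale back-and-forth.

    from dataclasses import dataclass, field

    @dataclass
    class Session:
        # Each turn is a (role, text) pair; "decision" marks notes worth keeping.
        turns: list = field(default_factory=list)

        def add(self, role: str, text: str) -> None:
            self.turns.append((role, text))

    def summarize(old: Session) -> str:
        # Placeholder: in practice you ask the assistant (or write by hand)
        # a short doc describing the code as it is NOW, not how it got there.
        kept = [text for role, text in old.turns if role == "decision"]
        return "Current state of the code:\n" + "\n".join(f"- {line}" for line in kept)

    def start_fresh_session(old: Session) -> Session:
        # Drop the stale history; carry forward only the handoff note.
        fresh = Session()
        fresh.add("user", summarize(old))
        return fresh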

[1] A huge problem with "the old AI": ordinary propositional logic sees the world from a "god's eye" point of view where everything happens at once, while in our temporal world we need temporal, if not bitemporal, logic, which is a whole can of worms. On top of that there is modal logic ("it is possible that X is true", "it is necessary that X is true") and modeling other people's beliefs ("John thinks that Mary thinks that X is true"). It's possible to create a logic which can solve specific problems, but a general-purpose commonsense logic is still beyond the state of the art.
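For readers who haven't seen these systems, here is roughly what the extensions look like in standard textbook notation (conventional operator symbols only, not tied to any particular system mentioned above):

    % Modal logic: possibility and necessity
    \Diamond X \quad \text{(it is possible that } X \text{ is true)} \qquad
    \Box X \quad \text{(it is necessary that } X \text{ is true)}

    % Temporal logic: truth is indexed to time
    \mathbf{F}X \quad \text{(} X \text{ holds at some future time)} \qquad
    \mathbf{G}X \quad \text{(} X \text{ holds at all future times)}

    % Doxastic (belief) logic: nested belief
    B_{\mathrm{John}}\, B_{\mathrm{Mary}}\, X \quad \text{(John believes that Mary believes } X \text{)}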