Comment by teekert
3 days ago
I do a combination, sometimes even asking the LLM and starting a ddg search in parallel. It speeds me up. Sometimes the LLM is right, sometimes it's not. NP, I'll get it to work. One should never do anything that one does not understand, but I get to understanding faster, since I can also ask more in-depth follow-up questions of the LLM.
For me, an LLM is just a rubber duck that talks back.
It is very stupid and usually wrong in some meaningful way, but it can help break logjams in my thinking, giving me clues I might be missing. Sort of like how writing gibberish is sometimes an effective way for writers to break writer's block.
It is also nice for generating boilerplate code in languages that I am not super familiar with.
The biggest problem I have with current state-of-the-art LLMs is that errors compound. Meaning that I only really get somewhat useful answers with the first few questions, or the first couple of times I ask it to review some code. The longer the session lasts, the more la-la-land answers I get.
It is a game of odds. I expect that with systemd and quadlets it is going to be particularly useless, because there just aren't that many examples out there. It can only regurgitate what it was trained on, so if something isn't widely used and checked into the code bases it was trained on, it can't really do anything with it.
Which is why it is nice for a lot of common coding tasks: a lot of code is just the same thing tens of thousands of people have done before, in only slightly different contexts, and is mostly boilerplate.
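For what it's worth, the quadlet format itself is small, which is part of why there's so little training data for it. A minimal sketch of a `.container` unit (the name, image, and ports here are placeholders, not from any real deployment):

```ini
# ~/.config/containers/systemd/web.container (rootless)
# Podman's systemd generator turns this into a regular service unit.
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, this shows up as a normal `web.service` you can start and enable, so an LLM that only knows plain systemd units is already most of the way there.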